javascript,dom,optimization,asynchronous,requestanimationframe

Both types of DOM operations (read/write) have their own job queues. Each queue is flushed (i.e. all jobs in it are run/executed) every requestAnimationFrame. If you add 100 read operations within 5ms, for example (during a loop, for instance), all of those read operations will (most likely) occur the...
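The queueing idea above can be sketched in plain JavaScript. This is a minimal illustration; `measure`, `mutate`, and `flush` are made-up names for this sketch, not a browser API (libraries such as fastdom implement the real pattern):

```javascript
// Minimal sketch of the two job queues. All queued reads run before any
// queued write in a given flush, so layout is only invalidated once per
// frame instead of on every interleaved read/write.
const readQueue = [];
const writeQueue = [];

function measure(job) { readQueue.push(job); }   // queue a DOM read
function mutate(job)  { writeQueue.push(job); }  // queue a DOM write

// In a browser, flush would be scheduled with requestAnimationFrame(flush).
function flush() {
  while (readQueue.length)  readQueue.shift()();
  while (writeQueue.length) writeQueue.shift()();
}

// Even when reads and writes are queued interleaved, reads run first.
const order = [];
mutate(() => order.push("write"));
measure(() => order.push("read"));
flush();
```

Queueing 100 reads inside a 5ms loop would likewise land them all in the same flush.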

python,python-2.7,optimization,matrix

In terms of refactoring, you almost certainly want to avoid "brute force" in the sense of having to explicitly type out all your cases :) @alexmcf addresses this above. In conceptual terms, your approach follows the problem statement directly: check all neighbors for each number in the matrix and sum...
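The offset-loop version of that neighbor check can be sketched like this (a hypothetical `neighbor_sum` helper, not the poster's code):

```python
# Instead of typing out all eight neighbor cases explicitly, iterate over
# the eight (dr, dc) offsets and skip out-of-bounds positions.
def neighbor_sum(matrix, r, c):
    rows, cols = len(matrix), len(matrix[0])
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue  # skip the cell itself
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                total += matrix[nr][nc]
    return total

m = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
```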

java,variables,optimization,value

The cost of calling the method isEmpty() (allocating new space on the thread stack, etc.) negates any gains. If you want to assign an empty String to the variable, it's most efficient to do so without the if statement.

You can try that out yourself using PHP's microtime function: $saleprice = 1; $start = microtime(true); if(!empty($saleprice)) echo $saleprice . '<br/>'; echo 'empty: ' . number_format(( microtime(true) - $start), 30) . '<br/>'; $start = microtime(true); if(!is_null($saleprice)) echo $saleprice . '<br/>'; echo 'is_null: ' . number_format(( microtime(true) - $start), 30) ....

They do where it makes sense, though typically not for the concrete type but for an interface (for example, they check for IList when it helps speed things up)

javascript,arrays,optimization,memory-management,data-structures

splice is pretty harmful for performance in a loop. But you don't seem to need mutations on the input arrays anyway - you are creating new ones and overwriting the previous values. Just do function doTransfers() { var A_pending = []; var B2_pending = []; for (var i = 0;...

It turns out the delay is the Microsoft antivirus program scanning the executable each time it's run. Disabling protection on that file cuts the time to 47 milliseconds.

c,performance,opencv,optimization,sse

As harold said, delta is used to make an unsigned comparison. Let's describe this implementation by steps: __m128i x0 = _mm_sub_epi8(_mm_loadu_si128((const __m128i*)(ptr + pixel[0])), delta); __m128i x1 = _mm_sub_epi8(_mm_loadu_si128((const __m128i*)(ptr + pixel[4])), delta); __m128i x2 = _mm_sub_epi8(_mm_loadu_si128((const __m128i*)(ptr + pixel[8])), delta); __m128i x3 = _mm_sub_epi8(_mm_loadu_si128((const __m128i*)(ptr + pixel[12])), delta); m0 =...

Removing the viewBox creates a significant semantic difference, as the SVG will no longer scale (i.e. be responsive to UA resizes). This only applies if you're viewing the image directly, though; if you're viewing it as a background-image, or via an SVG <image> tag or an HTML <img> tag, then the...

c#,xml,csv,optimization,type-conversion

IEnumerable<string> values = new List<string>(); values = … Probably not going to be a big deal, but why create a new List<string>() just to throw it away? Replace this with either: IEnumerable<string> values; values = … if you need values defined in a previous scope, or else just: IEnumerable<string> values...

mysql,optimization,query-optimization

It is not possible to optimize a mixture of ASC and DESC, as in ORDER BY t.fecha DESC, t.idsensor ASC You tried a covering index: INDEX `sensor_temp` (`idsensor`,`fecha`,`temperatura`) However, this covering index may be better: INDEX `sensor_temp` (`fecha`,`idsensor`,`temperatura`) Then, if you are willing to get the sensors in a different...

python,arrays,optimization,multidimensional-array,list-comprehension

You need a different data structure for fast lookups: dict rather than list. Here's an example: array1 = [ [[1,2], [1,5]], [[3,2], [7,5]], ] array2 = [ [[3,2], [9,9]], [[1,2], [1,5]], ] lookup = {} for r, row in enumerate(array1): for c, val in enumerate(row): pair = tuple(val) lookup[pair] =...
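A complete version of that sketch might look like this (variable names follow the snippet; the `matches` list at the end is one illustrative way to use the lookup, not necessarily the asker's goal):

```python
# Build a dict mapping each [x, y] pair (as a tuple, since lists aren't
# hashable) to its position in array1. Membership tests for array2's pairs
# then cost O(1) instead of a full list scan.
array1 = [[[1, 2], [1, 5]], [[3, 2], [7, 5]]]
array2 = [[[3, 2], [9, 9]], [[1, 2], [1, 5]]]

lookup = {}
for r, row in enumerate(array1):
    for c, val in enumerate(row):
        lookup[tuple(val)] = (r, c)

# Collect every pair from array2 that also appears in array1.
matches = [tuple(v) for row in array2 for v in row if tuple(v) in lookup]
```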

Well, obviously the compiler isn't 'smart' enough to propagate the n constant and unroll the for loop. Actually it plays it safe since arg->n can change between instantiation and usage. In order to have consistent performance across compiler generations and squeeze the maximum out of your code, do the unrolling...

Here is an answer. First remove all the branches, and then implement it as SSE. I haven't checked the speed yet. const int T1 = 17560; const int T2 = 244583; const int SHIFT = 16; int A = 2*((dx ^ dy) >= 0)-1; //check (dy,dx) opposite sign dy = abs(dy);...

performance,matlab,function,optimization,plot

The cost is coming from the calls to hermiteH -- for every call, this creates a new function using symbolic variables, then evaluates the function at your input. The key to speeding this up is to pre-compute the hermite polynomial functions then evaluate those rather than create them from scratch...

c,optimization,binary-search,linear-search

You should look at the generated instructions to see (gcc -S source.c), but generally it comes down to these three: 1) N is too small. If you only have 8 different branches, you execute an average of 4 checks (assuming equally probable cases; otherwise it could be even faster)....
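For reference, the two searches being compared might look like this in C (a generic sketch, not the asker's code; note that the linear scan's loop branch is almost always predicted correctly, while the binary search's comparisons depend on the data):

```c
/* Sketch of linear vs. binary search over a small sorted array.
 * For small N the linear scan's predictable branches often win. */
#include <stddef.h>

int linear_search(const int *a, size_t n, int key) {
    for (size_t i = 0; i < n; i++)
        if (a[i] >= key)                      /* early exit: array is sorted */
            return a[i] == key ? (int)i : -1;
    return -1;
}

int binary_search(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;      /* avoids overflow of lo + hi */
        if (a[mid] < key) lo = mid + 1;
        else hi = mid;
    }
    return (lo < n && a[lo] == key) ? (int)lo : -1;
}
```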

java,optimization,machine-learning,scipy,stanford-nlp

What you have should be just fine. (Have you actually had any problems with it?) Setting termination both on max iterations and max function evaluations is probably overkill, so you might omit the last argument to qn.minimize(), but it seems from the documentation that scipy does use both with a...

You have some good answers already that answer your factual question: No, the C# compiler does not generate the code to do a single multiplication by 86. It generates a multiplication by 43 and a multiplication by 2. There are some subtleties here that no one has gone into though....
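The split can be seen by writing the equivalent forms out by hand. This is a sketch of the arithmetic only, not the exact instruction sequence the C# compiler or JIT emits; it uses the fact that 86 = 64 + 16 + 4 + 2:

```c
/* Two hand-written equivalents of x * 86, mirroring the kind of strength
 * reduction compilers apply. Unsigned types keep the shifts well-defined. */
#include <stdint.h>

uint32_t mul86_shift(uint32_t x) {
    /* 86 = 64 + 16 + 4 + 2, so build the product from shifts and adds. */
    return (x << 6) + (x << 4) + (x << 2) + (x << 1);
}

uint32_t mul86_split(uint32_t x) {
    /* The two-multiply form described above: 86 = 43 * 2. */
    return (x * 43u) * 2u;
}
```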

So, memcached is better and faster! BUT: memcached can evict old keys when their TTL expires, or when memory is too small while other keys are being set. The situation may be: in the first request you set a key-value pair in memcached. In a second (parallel) request you set another key-value pair, and given memcached's small memory...

Deterministic variant First a more efficient but deterministic approach: occurs_most([],_,0). occurs_most(List,X,Nr) :- msort(List,[H|T]), most_sort(T,H,1,H,1,X,Nr). most_sort([Hb|T],Ha,Na,Hb,Nb,Hr,Nr) :- !, Nb1 is Nb+1, most_sort(T,Ha,Na,Hb,Nb1,Hr,Nr). most_sort([Hc|T],_,Na,Hb,Nb,Hr,Nr) :- Nb > Na, !, most_sort(T,Hb,Nb,Hc,1,Hr,Nr). most_sort([Hc|T],Ha,Na,_,_,Hr,Nr) :- most_sort(T,Ha,Na,Hc,1,Hr,Nr). most_sort([],Ha,Na,_,Nb,Ha,Na) :- Na >= Nb, !. most_sort([],_,_,Hb,Nb,Hb,Nb). First you use msort/2 to sort the list. Then you iterate over...

python,optimization,distribution,cx-freeze,pyglet

You can use the same -O flag when you run cx_freeze to generate your final build, meaning that the cx_freeze generated bytecode will already be optimized. From the cxfreeze docs: cxfreeze hello.py --target-dir dist Further customization can be done using the following options: ... -O optimize generated bytecode as per...

javascript,jquery,html,css,optimization

This looks great, and I have only a minor suggestion. Change your code as follows: for (j = 0; rule = rules[j]; j++) { var styles = rule.style, style, k; var elements = document.querySelectorAll(rule.selectorText); if(elements.length) { for(k = 0; style = styles[k]; k++) { ... } console.log(rule.cssText); } } This...

Since you mentioned "evolution strategy" in your comment to @Divkar, I am assuming you want to optimize the Ackley function using an evolutionary algorithm. In fact, if this is the case and if you are familiar with Particle Swarm Optimization (PSO), there is a submission (among many) in...

java,algorithm,search,optimization,simulated-annealing

So you are trying to find an n-dimensional point P' that is "randomly" near another n-dimensional point P; for example, at distance T. (Since this is simulated annealing, I assume that you will be decrementing T once in a while). This could work: double[] displacement(double t, int dimension, Random r)...
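One way the displacement function could work is to sample a Gaussian direction and scale it to length t (a sketch; the class name is made up and this assumes dimension >= 1):

```java
// Sketch: pick a point at distance t from P in n dimensions by sampling
// an isotropic Gaussian direction, normalizing it, and scaling by t.
import java.util.Random;

public class Neighbor {
    public static double[] displacement(double t, int dimension, Random r) {
        double[] d = new double[dimension];
        double norm = 0;
        for (int i = 0; i < dimension; i++) {
            d[i] = r.nextGaussian();  // Gaussian => uniform direction on the sphere
            norm += d[i] * d[i];
        }
        norm = Math.sqrt(norm);
        for (int i = 0; i < dimension; i++)
            d[i] = d[i] / norm * t;   // scale the unit direction to length t
        return d;
    }
}
```

Adding the returned displacement to P gives a point exactly at distance t; as T is decremented during annealing, the neighborhood shrinks accordingly.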

(GREATEST and COALESCE are not relevant to the real question.) It may or may not be faster. Here's the logic... OR usually eliminates using an index. SELECT ... ORDER BY indexed_column LIMIT 1 may be able to use indexed_column to find the 1 row you want with no extra effort....

There are a few problems with your overall process that cause trouble. First of all, as @user227710 mentions in the comments, you should replace && with &. These have different meanings. Now for the optimiser: it looks like you want to set limits to your parameters (i.e. what is known...

algorithm,optimization,bin-packing

If the problem under consideration is the generalized assignment problem, it is NP-hard but admits an approximation algorithm. From a brief look, the approximation ratio depends on the approximation ratio of an approximation algorithm for the knapsack problem, which in turn admits a fully polynomial time approximation scheme. In total....

javascript,jquery,performance,function,optimization

Using your HTML with only two buttons and identical CSS, the following JS works better for me: Javascript: var newsDepth = 3; var maxNewsDepth = 3; $('#show-more-news').hide(); $('#show-more-news').click( function () { (newsDepth < maxNewsDepth ) && newsDepth++; $('#art-' + newsDepth).show(); $('#show-less-news').show(); if (newsDepth == maxNewsDepth) $('#show-more-news').hide(); }); $('#show-less-news').click( function...

optimization,jvm-hotspot,jrockit

You can create an opt file, see JRockit R28: http://docs.oracle.com/cd/E15289_01/doc.40/e15059/crash.htm#BABJGICB Earlier releases: https://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/geninfo/diagnos/crash.html#wp1010461...

By default Z3 solves the objectives one at a time and finds the lexicographically best solution. First it tries to satisfy as many soft constraints from "first" as possible. The weight you associate with the soft constraints is a penalty for not satisfying the constraint. That is, it is not a reward,...

r,function,optimization,mathematical-optimization

I think you want to minimize the square of a-fptotal ... ff <- function(x) myfun(x)^2 > optimize(ff,lower=0,upper=30000) $minimum [1] 28356.39 $objective [1] 1.323489e-23 Or find the root (i.e. where myfun(x)==0): uniroot(myfun,interval=c(0,30000)) $root [1] 28356.39 $f.root [1] 1.482476e-08 $iter [1] 4 $init.it [1] NA $estim.prec [1] 6.103517e-05 ...

Here is a JMH benchmark: @OutputTimeUnit(TimeUnit.SECONDS) @BenchmarkMode({ Mode.Throughput }) @Warmup(iterations = 10) @Fork(value = 1) @State(Scope.Benchmark) public class MyBenchmark { private static final double CONSTANT = 1.6712 * 1000 * 60; private double x = 0; @Benchmark public void testCaseOne() { for (double i = 1; i < 1000_000; i++)...

php,apache,symfony2,optimization,wildcard-subdomain

You could cache each of the subdomain requests (using something like Doctrine Cache as a wrapper for whatever caching system you use) so that each subsequent check would only need to hit the cache rather than the database. Also, when adding/removing/updating your subdomain object you could update the cache...

python,opencv,numpy,optimization,cython

Use scipy.spatial.distance.cdist for the distance calculation in points_distance. First, optimize your code in pure Python and numpy. Then, if necessary, port the critical parts to Cython. Since a number of functions are called repeatedly, some ~100000 times, you should get some speedup from Cython for those parts. Unless,...
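A minimal cdist usage sketch (toy data, not the asker's points):

```python
# cdist computes the whole pairwise distance matrix in compiled code,
# replacing a per-pair Python loop with a single vectorized call.
import numpy as np
from scipy.spatial.distance import cdist

a = np.array([[0.0, 0.0], [3.0, 4.0]])
b = np.array([[0.0, 0.0], [3.0, 5.0]])

d = cdist(a, b)              # d[i, j] = euclidean distance between a[i] and b[j]
nearest = d.argmin(axis=1)   # index of the closest b-point for each a-point
```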

Try looking at it as a bipartite graph, trying to maximize the flow: Order the distances between the cities and the labs from closest to farthest, then iterate over the list and: Move x samples from the city to the lab - where x=min(max_lab_capacity, number_of_samples). The edge between the city...

c++,performance,optimization,x86,prefetch

If you read the description for _mm_prefetch() from the site you linked to, it has: void _mm_prefetch (char const* p, int i) Fetch the line of data from memory that contains address p to a location in the cache hierarchy specified by the locality hint i. So you need...

To answer your question, it is NOT valid to put the link tag outside of the html tag. Use https://validator.w3.org/#validate_by_input+with_options and see for yourself. Just paste the code in and you get: Line 11, Column 40: Stray start tag link. Then you get: Line 11, Column 40: Cannot recover after last error. Any...

c#,image-processing,optimization,winrt-xaml

You are currently awaiting file.GetThumbnailAsync for each file, which means that although the function is executed asynchronously for each file, it is executed in order, not in parallel. Try converting each async operation returned from file.GetThumbnailAsync to a Task, then storing those in a list, and then await all tasks...

matlab,optimization,vectorization

You can do that completely vectorized with logical operators. You can essentially replace that code with: function bottleradius = aux_bottle_radius(z_array) %// Declare initial output array of all zeroes bottleradius = zeros(size(z_array)); %// Condition #1 - Check for all values < -30e-3 and set accordingly bottleradius(z_array < -30e-3) = 34e-3; %//...

javascript,jquery,optimization

Rename f() to something else that makes more sense to your domain: var f = function(indicatorClicked, remainingIndicators) { $(indicatorClicked).toggleClass("icon-caret-up icon-caret-down"); $.each(remainingIndicators, function(index, indicator) { $(indicator).removeClass("icon-caret-up").addClass("icon-caret-down"); }); } $("#IDArea1").click(function () { f('#indicator1', ['#indicator2', '#indicator3']) }); $("#IDArea2").click(function () { f('#indicator2', ['#indicator1', '#indicator3']) }); $("#IDArea3").click(function () {...

Like I said in my comment above, perhaps it would be best if you simply tried it and did some benchmarking. I'd expect this to depend primarily on the OS you're using. That being said, starting a new process generally is many orders of magnitude slower than calling a subroutine...

Way 2 is better because it gives you a unified way of setting a variable value. But it brings in a risk, because you're calling an overrideable method in a constructor. So the right syntax is using the final keyword: public final void setField(int field){ this.field = field; } //way...

c,performance,optimization,alphablending,lookup-tables

Table lookup is not a panacea. It helps when the table is small enough, but in your case the table is very big. You write "16 megabytes used for the table is not an issue in this case", which I think is very wrong and is possibly the source of...

It is not a convex problem, hence it cannot be solved directly with the tools you mention. It is inherently combinatorial, in the sense that you can enumerate cases based on the sign of the two variables, and for every such case, a linear equality holds. Hence, you can either...

java,algorithm,optimization,foreach,bubble-sort

Using indexOf is not correct: if your ArrayList x contains more than one occurrence of a value i, your sort method will behave incorrectly. For example, given this case: [5, 5, 1] your program returns [5, 5, 1]. Plus, using indexOf, as mentioned by tucuxi, will slow down your program....

algorithm,optimization,numerical-methods

For this setting, I'd go with Golden Section Search. Convexity implies unimodality, which is needed by this method; in return, this method does not need derivatives. You can find derivatives numerically, but that's another way of saying "multiple function evaluations"; might as well use these for golden-section partitions. ...
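A compact golden-section implementation might look like this (a standard textbook sketch, assuming f is unimodal on [a, b]; for brevity it re-evaluates f at both interior points each iteration, where a tuned version would reuse one of them):

```python
# Golden-section search: derivative-free minimization of a unimodal f on
# [a, b]. Each iteration shrinks the bracket by the golden ratio.
import math

def golden_section_min(f, a, b, tol=1e-8):
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c = b - invphi * (b - a)                 # left interior point
    d = a + invphi * (b - a)                 # right interior point
    while b - a > tol:
        if f(c) < f(d):                      # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2
```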

There isn't much more to it. For soft constraints the number on the right of "|->" is given as follows. Suppose we assert (assert-soft F1 :weight w1 :id first) (assert-soft F2 :weight w2 :id first) (assert-soft F3 :weight w3 :id first) And suppose M is a model maximal assignment that...

java,optimization,jvm,jvm-hotspot

The fact that what you see is the result of some JIT optimization should be clear by now, looking at all the comments you received. But what is really happening, and why is that code almost always optimized after the same number of iterations of the outer for loop? I'll try...

r,optimization,circular,maximization

I would compute all the pairs of rows in df: (pairs <- cbind(1:nrow(df), c(2:nrow(df), 1))) # [,1] [,2] # [1,] 1 2 # [2,] 2 3 # [3,] 3 4 # [4,] 4 5 # [5,] 5 6 # [6,] 6 1 You can find the best pairing with which.max:...

networking,optimization,ssh,ansible

If you are working in a network where the connection might be lost during execution of a play/task, then I'm not sure if (read: I don't think) Ansible saves the execution context often enough to recover from such issues. If your network is bad, you should fix that....

javascript,arrays,optimization,variable-declaration

Both are the same. for loops don't create a new scope so the variables are hoisted to the top of the containing function scope. ex: > i undefined > for (var i = 0; i < 5; i++) { var test = i; } undefined > i 5 > test...

No (technically there could be a difference, since OpenGL does not impose any performance requirements on the function call). Btw., you should set the clear color before calling clear....

You should rather use the optimized cdist from scipy.spatial which is more efficient than calculating it with numpy, from scipy.spatial.distance import cdist dist = cdist(data, C, metric='euclidean') dist_idx = np.argmin(dist, axis=1) An even more elegant solution is to use scipy.spatial.cKDTree (as pointed out by @Saullo Castro in comments), which could...

If you want only the most recent comment for a given user, then you can express this as: SELECT rc.* FROM requests r JOIN requests_comments rc ON rc.request_id = r.id WHERE r.username = 'someuser' ORDER BY rc.commented_at DESC LIMIT 1; For performance, you want an index on requests(username, id) and...

algorithm,optimization,dynamic-programming,frequency

There is no need for Dynamic Programming for this problem. It is a simple sorting problem with a slight twist. Find frequency / length for each file. This tells you how frequently each unit of length of the file is accessed. Sort these in descending order, since each file is "penalized"...
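The sort itself is a one-liner once the frequency/length ratio is computed (illustrative data and field names, not the asker's):

```python
# Greedy order: sort files by access frequency per unit length, descending,
# so frequently accessed short files come first.
files = [
    {"name": "a", "length": 100, "freq": 30},   # ratio 0.30
    {"name": "b", "length": 20,  "freq": 15},   # ratio 0.75
    {"name": "c", "length": 50,  "freq": 50},   # ratio 1.00
]

order = sorted(files, key=lambda f: f["freq"] / f["length"], reverse=True)
names = [f["name"] for f in order]
```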

c,optimization,clang,inline,c99

At -O1 and greater it is not calling the function; it just moves the code into main. We can see this by using godbolt, which shows the asm as follows (see it live): main: # @main pushq %rax movl $.L.str, %edi movl $42, %esi xorl %eax, %eax callq printf xorl...

Your function is NOT convex, therefore you will have multiple local/global minima or maxima. For your function I would run a non-traditional / derivative-free global optimizer like simulated annealing or a genetic algorithm, and use the output as a starting point for BFGS or any other local optimizer to get...

How about you use a switch/case structure to determine which function to call and what value of y to use? returnType (*func)(); // Create a function pointer and use that switch(userInput){ case 0: func = &func1; y = 5; break; case 1: func = &func2; y = 6; break; } To simplify...

while((j < nprimos) && (num % primos[j] > 0)) ++j; One way for this while loop to terminate is when j == nprimos (when i == 0 and therefore num == 1, for instance; obviously 1 isn't divisible by any prime number.) In this case the access to primos[j] in...

If you want to create unique relationships you have 2 options: Prevent the path from being duplicated, using MERGE, just like @user2194039 suggested. I think this is the simplest, and best approach you can take. Turn your relationship into a node, and create an unique constraint on it. But it's...

The % gap is the relative difference between the best solution and the best known bound for a solution. This is guaranteed to be higher than the gap between the best known solution and the true global optimum. In your case, the best bound is 83.9275, and the best solution is...

mysql,sql,table,optimization,scan

You could try a self join like this: SELECT COUNT(DISTINCT sw1.word_id) FROM sentence_word sw1 JOIN sentence_word sw2 ON ( sw1.sent_id = sw2.sent_id AND sw2.word_id = [your word id] ) WHERE sw1.word_id != [your word id] or perhaps even better SELECT COUNT(DISTINCT sw1.word_id) FROM sentence_word sw1 JOIN sentence_word sw2 ON (...

python,numpy,optimization,fortran,f2py

The flags -xhost -openmp -fp-model strict come from def get_flags_opt(self): return ['-xhost -openmp -fp-model strict'] in the file site-packages/numpy/distutils/fcompiler/intel.py for the classes that invoke ifort. You have two options to modify the behavior of these flags: call f2py with the --noopt flag to suppress these flags call f2py with the...

sql-server,join,optimization,union

Well, first of all, you can use UNION ALL, which will leave out the duplication check. This will be much faster. Another suggestion concerns the IN() in your first SELECT. This could cause long runtimes if your subquery returns many rows. I would suggest redesigning it from this: select...

c#,string,algorithm,optimization,language-agnostic

I decided to add this answer not because it is the optimal solution to your problem, but to illustrate two possible solutions that are relatively simple and that are somewhat in line with the approach you seem to be following yourself. The (non-optimized) sample below provides an extremely simple prefix...

Store all elements in a single-dimension array first; in your case this will look like: $array = array('a','b','c','d','e','f'); Then use PHP's built-in function in_array() to check whether $col exists in the array; in your case this looks like: in_array($col, $array); Entire code: $array = array('a','b','c','d','e','f'); if(in_array($col, $array)) { continue; }...

java,algorithm,sorting,optimization

This is an instance of the Knapsack Problem. The Wikipedia page lists a lot of known algorithms that solve this efficiently. You might want to gather some inspiration from these algorithms.

I am not sure what exactly you want to do, but are you aware of scipy.ndimage.measurements for computing on arrays with labels? It looks like you want something like: cLoss = len(dist_) - sum(TLabels * scipy.ndimage.measurements.sum(TLabels,dist_,dist_) / len(dist_)) ...

string,delphi,optimization,pchar

Let's consider the corner cases. I think they are: AInput invalid. AStart < 1. AStart > FLength. ASubstringLength < 0. ASubstringLength + (AStart-1) > FLength. We can ignore case 1 in my opinion. The onus should be on the caller to provide a valid PChar. Indeed your check that AInput...

The problem was that I was installing the scip-3.1.1.tgz rather than scipoptsuite-3.1.1.tgz. Running make on scipoptsuite-3.1.1.tgz runs perfectly fine.

every element of the vector L must be less than or equal to 1. This should be written as a set of constraints, not a single constraint. Artificially bundling the constraints L(1)<=1, L(2)<=1, ... into one constraint is just going to cause more pain to the solver. Example with...

c++,optimization,visual-studio-2013,ole,office-automation

In this code: if (cmd==DISPATCH_PROPERTYPUT) { DISPID dispidNamed=DISPID_PROPERTYPUT; /* <--- PROBLEM LINE here */ dispParams.cNamedArgs=1; dispParams.rgdispidNamedArgs=&dispidNamed; } dispidNamed is local to the code block it is in (i.e. the area delimited by { }). After the } is reached, it ceases to exist. Then rgdispidNamedArgs is a dangling...

java,optimization,out-of-memory,primes

There are two possibilities: You use -Xmx256M which means a 256 MB heap. But there's more than just the heap and your VM may get killed when it tries to get more. You give 256 MB to your VM but your program needs more and gets killed. <---- As RealSkeptic...

c,linux,optimization,shared-libraries,glibc

Setting the environment variable LD_BIND_NOW should help achieving just that. Set it with export LD_BIND_NOW=1 then execute your program. Excerpt: ELF platforms (Linux, Solaris, FreeBSD, HP-UX, IRIX, etc.) support lazy binding of procedure addresses, which is an optimization that yields better performance overall but a genuine problem for applications that...

You don't need a (Linked)HashMap for this job. Use a NavigableMap (TreeMap would be the standard implementation) that provides a lot of useful operations for such use-cases: floor/ceiling, higher/lower, first/last and so on. // get the value at this specific time or the last one before that Double valueAtTheTime =...
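A small sketch of those NavigableMap operations with timestamps as keys (class name and data are illustrative):

```java
// TreeMap keeps keys sorted, so floorEntry/ceilingEntry answer
// "value at or before t" / "value at or after t" in O(log n).
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class TimeSeries {
    public static final NavigableMap<Long, Double> SERIES = new TreeMap<>();
    static {
        SERIES.put(1000L, 1.5);
        SERIES.put(2000L, 2.5);
        SERIES.put(3000L, 3.5);
    }

    // value at this exact time, or the last one before it
    public static Double atOrBefore(long t) {
        Map.Entry<Long, Double> e = SERIES.floorEntry(t);
        return e == null ? null : e.getValue();
    }

    // first value at or after the given time
    public static Double atOrAfter(long t) {
        Map.Entry<Long, Double> e = SERIES.ceilingEntry(t);
        return e == null ? null : e.getValue();
    }
}
```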

matlab,function,optimization,maximize

Both your functions return a vector of values whereas fminunc requires that the function returns a scalar / single value. The error is pretty clear. The function fminunc is trying to find the best solution that minimizes a cost function, so what you need to supply is a cost function....

c++,algorithm,optimization,dynamic-programming

First of all, you are right that iteratively combining the best remaining student with the worst remaining student gives you the optimum result (see proof below). However, you don't calculate the cost of that solution correctly. You have to run through all pairs of your combination in order to find...
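The corrected cost calculation can be sketched like this (assuming the cost of a solution is the maximum pair sum, as in common versions of this problem; an even-sized input is assumed):

```cpp
// Pair the best remaining student with the worst remaining one (after
// sorting, element i pairs with element n-1-i), then take the maximum
// pair sum over *all* pairs, not just the first one.
#include <algorithm>
#include <cstddef>
#include <vector>

int pairing_cost(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    int cost = 0;
    for (std::size_t i = 0; i < v.size() / 2; ++i)
        cost = std::max(cost, v[i] + v[v.size() - 1 - i]);
    return cost;
}
```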

Optim converges with your initial parameters, so I'm not sure there is a problem. But, you can also try using the alternative optimization routines and run a simple test to see what parameters are giving warnings ## Test results with other methods x1 <- optim(c(17,5,3,4,27,13),LL,method="BFGS",u=u) ps <- x1$par x2 <-...

assembly,optimization,bit-manipulation,division,multiplication

That method is called "Division by Invariant Multiplication". The constants that you're seeing are actually approximations of the reciprocal. So rather than computing: N / D = Q you do something like this instead: N * (1/D) = Q where 1/D is a reciprocal that can be precomputed. Fundamentally, reciprocals...

If the functions are linear, then they will be at a minimum at the lower end of the range where beta>=0, and at the upper end of the range if beta<=0 - no need to use optimize(). It's not entirely clear what you're expecting the code to do - if...

Create a variable dummy and initialize it to argc. Then in each loop iteration, increment dummy by the return value of the function. Then return dummy. This should stop the compiler from eliding the function calls.

python,google-app-engine,optimization,twitter,leaderboard

I would organize the task like this: Make an asynchronous fetch call to the Twitter feed Use memcache to hold all the AuthAccount->User data: Request the data from memcache, if it doesn't exist then make a fetch_async() call to the AuthAccount to populate memcache and a local dict Run each...

Note that there are 2^(n-1) different such names for a string of length n, as there may or may not be a _ between any two characters. For your long-long name that makes 134,217,728 variants! Regardless of what algorithm you use, this will take a very long time. One way...

optimization,machine-learning,dataset

I don't know if this format really provides better representation, but I can speculate why it can be more efficient. First, as they state at format description, "Having data of the same precision consecutive enables hardware vectorization."; consider also wikipedia: "Vector processing techniques have since been added to almost all...

php,mysql,arrays,performance,optimization

Database connections are going to be the most expensive part of the algorithm, so minimize the amount of time each connection will last. Pull it from the database and loop through the array. Unless you are doing a function that would be very quick in a database (ie- sorting on...

You can do it in one line using arrayfun, though it looks kind of convoluted and simply looping over p might be faster: function F = myfun(x,p) F = [p*x(1) - x(2) - exp(-x(1)); -x(1) + 2*x(2) - exp(-x(2))]; p_values = -3:.1:3; x = arrayfun(@(p)fsolve(@(x)myfun(x,p), x0), p_values, 'UniformOutput', false) and...

What you are looking for are nonlinear constraints, fmincon can handle it (I only know the command, not the GUI) with the argument nonlcon. For more information look at this guide http://de.mathworks.com/help/optim/ug/fmincon.html How would you implement this? First create a function function [c, ceq] = mycondition(x) c = -max(x)/min(x)/10; ceq...

If the var<xx> variables are all multiples of ten, i.e. there are no other variables beginning with var, you can use the colon-operator, which acts as a wildcard, e.g. drop var: ; /* drop all variables beginning with 'var' */ Alternatively, you can dynamically generate a list of all the...

This code causes undefined behaviour: uint32_t *p = (uint32_t *)vector; uint32_t tmp = p[0]; The memory pointed to by vector is an object of type uint64_t; however, you read it via an lvalue of type uint32_t. This violates the strict aliasing rule. Since your program always calls this function,...
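One conforming alternative is to copy through memcpy, which compilers lower to a plain load (a sketch of the general technique, not the asker's full function; note that which half of the uint64_t the first four bytes hold depends on the target's endianness):

```c
/* memcpy through a correctly-typed temporary does not violate strict
 * aliasing, and optimizing compilers turn it into a single 32-bit load. */
#include <stdint.h>
#include <string.h>

uint32_t first_word(const uint64_t *vector) {
    uint32_t tmp;
    memcpy(&tmp, vector, sizeof tmp);  /* reads bytes 0-3 of *vector */
    return tmp;
}
```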

MillenialMedia - Pretty decent engineering. Pretty good revenue. Inmobi - Good revenue. Sometimes really dodgy ads. I've experimented with half a dozen others, but they provided insignificant revenue or had many bugs. But your mileage may vary....

android,optimization,virtual-machine,loader,dalvik

The /system/app directory is read-only on normal (non-developer) devices, and only updated when the system receives an update. The point of /system/app/*.odex is that the .odex file can be delivered as part of a system update, so it doesn't have to be generated on the first post-update boot, and doesn't...

r,optimization,regression,rscript

This seems to work fine: opt1 <- optim(startparam, fn=ls,method="L-BFGS-B", Val=Val,Hi=Hi,Di=Di, lower =c(20,10,0), upper =c(100,70,25)) note that the values of Val, Hi, Di get passed through optim to the objective function....

r,optimization,integrate,mapply

You are performing a large number of independent integrations. You can speed things up by performing these integrations on separate cores simultaneously (if you have a multicore processor available). The problem is that R performs its calculations in a single-threaded manner by default. However, there are a number of packages...