arrays,time,struct,complexity-theory

The complexity of your program is O(n+2). Since you have an array of length n, you need at most n time units to find the element. Then, to get the two values of the struct, you need another two time units. You will not need 2n because for each position...

algorithm,artificial-intelligence,complexity-theory,nearest-neighbor

I had the same moment of disbelief when reading that axiom; a parameter of higher value that decreases complexity seems a bit counterintuitive at first. To put an intuition on this, let's compare a 1-nearest-neighbour trained model and an N>>1-nearest-neighbours one. Let's use a simplified 2D plot (two-feature dataset) with...

recursion,time,complexity-theory

It's O(n^2), since increasing n increases both the number of for-loop calls and the time required for each for-loop to finish. Both grow linearly, resulting in the formula O(n) * O(n) = O(n^2)....
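A minimal Python sketch (the loop structure is my assumption, since the original code isn't shown) makes the quadratic growth concrete: counting the work of two nested loops that each run n times gives exactly n * n operations.

```python
def count_ops(n):
    # Count the basic operations of two nested loops,
    # each running n times: total work is n * n.
    ops = 0
    for _ in range(n):        # outer loop: n iterations
        for _ in range(n):    # inner loop: n iterations each time
            ops += 1
    return ops

# Doubling n quadruples the work, the signature of O(n^2).
assert count_ops(10) == 100
assert count_ops(20) == 400
```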

algorithm,graph,big-o,complexity-theory,bellman-ford

Yes, O(VE + V + E) simplifies to O(VE) given that V and E represent the number of vertices and edges in a graph. For a highly connected graph, E = O(V^2) and so in that case VE + V + E = O(V^3) = O(VE). For a sparse graph,...

I am not sure exactly what you are asking; maybe you can clarify. The time complexity of this algorithm is O(n). The loop will execute n times, until i is greater than n: i starts at zero and increments by 1 on every iteration of the loop. I hope...

c#,arrays,list,arraylist,complexity-theory

Both ArrayList and List<T> have the same basic implementation, using an array to store the items, reallocating as necessary when items are added. Indeed, much of List<T> is simply copy/paste from ArrayList, with appropriate changes to support the generics. While there may be small differences in actual performance, the "big-O"...

It's: log3(n) + log3(n^2) + ... + log3(n^n) = log3(n) * n(n+1)/2; in short: n^2 log(n) ...
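The identity can be checked numerically; since log3(n^k) = k * log3(n), the sum over k = 1..n collapses to log3(n) * n(n+1)/2, i.e. Θ(n² log n):

```python
import math

def log_sum(n):
    # Sum of log3(n^k) for k = 1..n, term by term.
    return sum(math.log(n ** k, 3) for k in range(1, n + 1))

def closed_form(n):
    # log3(n) * n(n+1)/2
    return math.log(n, 3) * n * (n + 1) / 2

# The two agree up to float rounding, confirming the simplification.
assert math.isclose(log_sum(12), closed_form(12), rel_tol=1e-9)
```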

algorithm,complexity-theory,partition,np

One way of looking at this is to say that you are trying to express a target number, which is total/2, as a sum of the numbers provided. This is really just the problem of making change. Here is a useful lemma: if you have a collection of coins whose...

big-o,complexity-theory,time-complexity

The worst case complexity of your loop is O(N²). The loop is executed N times. Every iteration has another dependency on N, because: s(i) = sum of the first i coordinates of a fixed vector (of dimension N); I can't see any dependency on N here, so per http://en.wikipedia.org/wiki/Big_O_notation it's...

algorithm,math,big-o,complexity-theory,logarithm

Comments are getting long, so here's a sketch of a proof for you. This is probably homework, so please make sure you learn something instead of just copying it down. In order to show that f is O(n), you have to show that there is an M and n1 where...

algorithm,math,big-o,time-complexity,complexity-theory

Using O(n^(1/4)) is perfectly fine for big O notation. There are fractional exponents in real-life examples. O(n) is also correct (because big O gives only an upper bound), but it is not tight; so n^(1/4) is in O(n), but not in Theta(n). n^(1/4) is NOT...

time-complexity,complexity-theory

Don't have enough reputation points, hence posting as answer. Perhaps your use of factor in two different senses is the source of the confusion. Time is but one factor out of many possible complexity factors, such as storage, bandwidth, etc. Exponential factors in the case of polynomial algorithms refer to...

complexity-theory,time-complexity,asymptotic-complexity,amortized-analysis

Yes, it can. Amortized complexity takes into account the frequency with which the worst case appears. Thus, as soon as the worst case appears in only about 1 in N^2 operations, the amortized complexity will be constant. Let's take a simple example - the dynamically expanding array (I will call that vector...
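A sketch of that dynamically expanding array in Python (the function name is mine): appends are O(1) except when the backing storage doubles, and counting the copy work shows the total stays below 2n, i.e. amortized O(1) per append.

```python
def append_cost(num_appends):
    # Total element copies performed by a doubling vector
    # over num_appends appends (capacity starts at 1).
    capacity, size, copies = 1, 0, 0
    for _ in range(num_appends):
        if size == capacity:      # worst case: reallocate and copy
            copies += size
            capacity *= 2
        size += 1
    return copies

# Copy work is 1 + 2 + 4 + ... < 2n, so each append is amortized O(1).
n = 1000
assert append_cost(n) < 2 * n
```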

algorithm,time-complexity,complexity-theory,asymptotic-complexity

Given: f(x) = x log x + 3 log x² = x log x + 6 log x // since log(a^b) = b log a. As we know, f(x) = O(g(x)) if |f(x)| <= M·|g(x)|, where M is a positive real number. Therefore, for M >= 7 and x varying in the positive real range, M · x...

This sort of processing is easier in Java 8. String[] input = {"bat", "bta", "cat", "tca", "vish"}; Collection<List<String>> grouped = Stream.of(input) .collect(Collectors.groupingBy(w -> sortLetters(w))) .values(); public static String sortLetters(String w) { char[] letters = w.toCharArray(); Arrays.sort(letters); return new String(letters); } The time complexity is O(n) where n is the number...

algorithm,loops,time-complexity,complexity-theory,probability

Let the length of the string be written as n: n = str.length(); Now, the outer for-loop iterates over the whole length of the String, i.e., n times. Hence, the outer for-loop complexity is O(n). As for the child loop, the inner while loop executes (val)^(1/62) times. So, you can consider the inner while-loop...

algorithm,fft,complexity-theory

Looks like homework, so no actual code for your case is provided; instead, some examples and hints. For FFT-like algorithms: 1. set the dataset size to a power of 2 by zero padding, so the division into halves is simple (no remainders); 2. create a recursive function to compute your FFT-like stuff in it...

The array returned by Colour.values() always contains the same elements, but values() creates a new array every time it's called (so it's O(n) where n is the number of enum values). And you never modify that array. So calling the method every time you need it is a waste of...

algorithm,sorting,big-o,time-complexity,complexity-theory

The outer for-loop would run n times. But it also holds an inner for-loop which is dependent on the value of j. I think that the first for-loop test will execute n times, and the nested for-loop will execute 1 + 2 + ... + n = n(n+1)/2 times....
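That count is easy to verify numerically (the loop bounds here are my assumption, mirroring the description: the inner loop runs j times on outer iteration j):

```python
def nested_count(n):
    # Inner loop runs j times on outer iteration j (j = 1..n).
    count = 0
    for j in range(1, n + 1):
        for _ in range(j):
            count += 1
    return count

n = 100
# Matches the triangular-number formula 1 + 2 + ... + n = n(n+1)/2.
assert nested_count(n) == n * (n + 1) // 2
```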

algorithm,runtime,complexity-theory,master-theorem

Here is a code snippet to find the divisor pairs of all numbers from 2 to N, done in O(N^(3/2)). for(int i=2;i<=N;i++) { for(int j=2;j*j<=i;j++) { if(i%j==0) { printf("another non-trivial divisor pair for %d is %d,%d",i,j,i/j); } } } The outer loop is O(N) and the inner is O(N^(1/2))....
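The same snippet ported to Python (printing replaced by collecting the pairs, a choice of mine), keeping the O(N·√N) structure:

```python
def divisor_pairs(N):
    # For each i in 2..N, find non-trivial divisor pairs (j, i // j).
    pairs = []
    for i in range(2, N + 1):      # outer loop: O(N)
        j = 2
        while j * j <= i:          # inner loop: O(sqrt(i)) iterations
            if i % j == 0:
                pairs.append((i, j, i // j))
            j += 1
    return pairs

# 12 has the non-trivial divisor pairs (2, 6) and (3, 4).
assert (12, 2, 6) in divisor_pairs(12)
assert (12, 3, 4) in divisor_pairs(12)
```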

algorithm,tree,binary-search-tree,complexity-theory

Your current inorder traversal using recursion to perform the task. That makes it difficult to run more than one at the same time. So, first I would re-write the method to use an explicit stack (example here in C#). Now, duplicate all of the state so that we perform traversals...

c#,complexity-theory,cyclomatic-complexity

The key is in "independent paths". I'm going to rewrite your code to shorten it so we can discuss it. public bool FitsCheckBoxCriteria(TaskClass tasks) { bool E1 = A1 && A2; bool E2 = B1 && B2 && B3 && B4; bool E3 = C1 && C2 && C3; bool...

string,constants,big-o,complexity-theory,notation

Yes, that's correct. The space complexity of storing a new string of length n is Θ(n) because each individual character must be stored somewhere. In principle you could reduce the space usage by noticing that stringval2 ends up being a copy of stringval1 and potentially using copy-on-write or other optimizations,...

In the world of games programming, the idea of "Space Partitioning" is commonly used to optimise code for things like collision detection. A very simple explanation goes as follows. You start off with your objects in your 2D space, and you use an algorithm to divide up the space to...

recursion,big-o,complexity-theory

If you use the master theorem, you get the result you expected. If you want to prove this "by hand", you can see it easily by supposing n = 2^m is a power of 2 (as you already said). This leads you to T(n) = 2⋅T(n/2) + O(n) = 2⋅(2⋅T(n/4)...

java,algorithm,complexity-theory

Let's ignore the fact that i = n * n for a minute and just consider the algorithm while(i > 0) { for(int j=0; j<i; j++) { a++; } i=i/2; } i = i/2 divides i by 2, rounding down. In the worst case, there will be no rounding...
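The total work can be checked in Python: with i starting at n·n and halving each pass, the inner-loop counts form a geometric series n² + n²/2 + n²/4 + ... that sums to less than 2·n² increments.

```python
def total_increments(n):
    # Mirror of the loop: i starts at n*n, the inner loop runs
    # i times, then i is halved (integer division) until 0.
    i, a = n * n, 0
    while i > 0:
        for _ in range(i):
            a += 1
        i //= 2
    return a

n = 50
# Geometric series: n*n + n*n/2 + n*n/4 + ... < 2 * n*n
assert n * n <= total_increments(n) < 2 * n * n
```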

haskell,recursion,data-structures,tree,complexity-theory

concat does not have terrible complexity; it is O(n), where n is the total number of elements in each list but the last. In this case, I don't think it's possible to do any better, with or without an intermediate structure, unless you change the type of the result. The...

algorithm,haskell,optimization,complexity-theory

It's possible to improve the subsequencesOfSize function, that user5402 mentioned. Compare this and this. This is because of (l-n) in the second version, so subsequencesOfSize 3 [1..350] is equal to subsequencesBySize [1..350] !! 347, so a lot of unused lists are created. When filtering subsequences by a predicate p which...

c++,big-o,complexity-theory,time-complexity

This wastes O(n^m) time in a recursive way. void waste ( unsigned n, unsigned m ) { if ( m ) for ( unsigned i=0; i<n; i++ ) waste(n,m-1); } Actually, that was a simple question....

java,algorithm,complexity-theory,space-complexity

The space complexity accounts for the amount of live memory used by the algorithm, therefore the ArrayList<int[]> factors in to the complexity. However, the algorithm description given at the top of your post says that you only need to print the combinations, not return them; if you immediately print out...

algorithm,language-agnostic,time-complexity,complexity-theory,analysis

Answering for the first part :- for i = 1 to 526 for j = 1 to n^2(lgn)^3 for k = 1 to n x=x+1 This program will run 526 * n^2 * (lg n)^3 * n times = 526 * n^3 * (lg n)^3 times. So, x = x...

math,time-complexity,complexity-theory

This statement is sometimes true and sometimes false depending on the base of the logarithm. Examples: O(2^(lg n)) = O(n), where lg is the binary logarithm. O(2^(log_4 n)) = O(n^(1/2)), where log_4 is the logarithm to base 4. O(2^(log_1.25992 n)) = O(n^3), because 1.25992 is the cube root of 2....
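These examples all follow from the identity 2^(log_b n) = n^(log_b 2) = n^(1/log₂ b); a quick numeric check:

```python
import math

def two_pow_log(n, base):
    # 2 raised to the base-`base` logarithm of n
    return 2 ** math.log(n, base)

n = 1000.0
# Base 2: 2^(lg n) = n
assert math.isclose(two_pow_log(n, 2), n)
# Base 4: 2^(log_4 n) = n^(1/2)
assert math.isclose(two_pow_log(n, 4), n ** 0.5)
# Base 2^(1/3) ~ 1.25992: 2^(log_1.25992 n) = n^3
assert math.isclose(two_pow_log(n, 2 ** (1 / 3)), n ** 3, rel_tol=1e-6)
```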

algorithm,time,complexity-theory

The precise numbers depend on what you consider atomic operations in the algorithmic model. However, it always holds that the entire list is visited, with incrementing the current index i, testing for loop termination and checking for a new minimum in each iteration. whenever the currently visited element of the...

algorithm,time,big-o,complexity-theory,time-complexity

Ahh!!! What is so typical about it! In your inner loops, you have variables named i & j, so you figured out the complexity easily. Just with the addition of an EDGE variable, which is not at all different from the other 2, you got confused! Check the number of iterations!!! The outer-loop...

performance,algorithm,big-o,complexity-theory,growth-rate

If we're talking about where to draw the line, I'd simply put it like this: the code's complexity is O(N·o(X)). As soon as one can judge the complexity of the function X(N), one can simply substitute it into the formula. Until then, it is a shorthand but useful notation...

algorithm,big-o,complexity-theory,induction

Coming to some basic maths: lg(a/b) = lg(a) - lg(b). This is the reason why: 2(n/2)lg(n/2) + n = n(lg(n) - lg(2)) + n = n(lg(n) - 1) + n. About the assumption of n/2: this assumption is the best one because it simplifies the induction step. In the induction...

algorithm,big-o,time-complexity,complexity-theory

The purpose of the Big-O notation is to find what is the dominant factor in the asymptotic behavior of a function as the value tends towards the infinity. As we walk through the function domain, some factors become more important than others. Imagine f(n) = n^3+n^2. As n goes to...

algorithm,math,big-o,time-complexity,complexity-theory

No, you haven't made any mistake. You are correct in thinking so, and your solution is perfect! The outer loop will run n times and the inner loop runs 5 (a constant number of) times. Therefore, the loops' complexity will be O(5*n) = O(n), and the other statements are of constant time-complexity. Since 5*n times...

java,algorithm,complexity-theory,code-complexity

In each iteration x is divided by 5. How many iterations would it take for x to become lower than 1 (and therefore 0)? The answer is log5(n) (logarithm to the base 5 of n), which is O(log(n))....
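A minimal sketch of that loop in Python, counting iterations against log₅(n):

```python
import math

def iterations(n):
    # Count how many integer divisions by 5 it takes x to reach 0.
    x, count = n, 0
    while x > 0:
        x //= 5
        count += 1
    return count

# The iteration count tracks log5(n), so the loop is O(log n).
assert iterations(125) == 4    # 125 -> 25 -> 5 -> 1 -> 0
assert iterations(1000) == math.floor(math.log(1000, 5)) + 1
```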

algorithm,time-complexity,complexity-theory

It's easy to see from calculus that if lim {n -> inf} a(n) / b(n) < inf then a(n) = O(b(n)) Also note that all the functions here go to infinity, so we can use L'Hôpital's rule. Finally, note that, asymptotically, Stirling's Approximation gives lim {n -> inf} n! /...

From the Master Theorem: T(n) = a * T(n/b) + f(n), where: a is the number of sub-problems; f(n) is the cost of the operation outside the recursion, f(n) = O(n^c); n/b is the size of each sub-problem. The idea behind this function is that you repeat the operation on the first half of items...

java,algorithm,geometry,complexity-theory,intersection

You can not find all intersection points of n circles in the plane in O(n log n) time because every pair of circles can have up to two distinct intersection points and therefore n circles can have up to n² - n distinct intersection points and hence they can not...

algorithm,recursion,big-o,complexity-theory,recurrence

It looks like the lower bound is pretty good, so I tried to prove that the upper bound is O(log n / log log n). But let me first explain the other bounds (just for a better understanding). TL;DR T(n) is in Θ(log n / log log n). T(n) is...

algorithm,big-o,complexity-theory,time-complexity

No, here the k and m terms are not superfluous; they do have a valid existence and are essential for computing the time complexity. They are wrapped together to provide a concrete complexity for the code. It may seem like the terms n and k are independent of each other in the code, but they both...

Algorithmic complexity has a mathematic definition. If f and g are two functions, f = O(g) if you can find two constants c (> 0) and n such as f(x) < c * g(x) for every x > n. For Ω, it is the opposite: you can find constants such...

java,time,complexity-theory,space,analysis

Your complexity depends on what you choose for n. If n is the number of files, the complexity is O(n) because each file is visited once.

algorithm,big-o,time-complexity,complexity-theory,asymptotic-complexity

Sometimes you can simplify calculations by calculating the amount of time per item in the result rather than solving recurrence relations. That trick applies here. Start by changing the code to this obviously equivalent form: private static IntTreeNode createBST(int[] array, int left, int right) { int middle = array[(left +...

algorithm,time,complexity-theory,master,theorem

Every time you solve a subproblem, you have to divide the current instance into more subproblems (cost = 100 steps). Then, you have to merge the results of the subproblems (cost = 75n steps). That means f(n) = 75n + 100 because f(n) represents the cost of solving a single...

javascript,sorting,underscore.js,complexity-theory

Your algorithm is not a comparison sort: A comparison sort is a type of sorting algorithm that only reads the list elements through a single abstract comparison operation (often a "less than or equal to" operator or a three-way comparison) that determines which of two elements should occur first in...

complexity-theory,np,restrictions

Converting my comment into an answer - consider the "empty problem," a problem whose instance set is empty. Since the empty set is a subset of every set, this problem technically counts as a restriction of any language (including languages not in NP). It's also a problem in P; you...

time-complexity,complexity-theory

Formal definitions of the O notations are: f(n) = O(g(n)) means that there exist some constants c and n0 such that f(n) <= c*g(n) for n >= n0. f(n) = Ω(g(n)) means that there exist some constants c and n0 such that f(n) >= c*g(n) for n >= n0. f(n) = Θ(g(n)) means...

algorithm,recursion,complexity-theory,asymptotic-complexity

If the depth of the stack (recursion) is constant and does not change with respect to the size of the input, then a recursive solution can be O(1) extra space. Some compilers may do tail call optimization (TCO) and remove recursive calls if they are the last statement executed in...

java,complexity-theory,quicksort,median

You are calling exchange too much with indices that you have already read. You can save a lot of reads by replacing those calls with "manual" calls to setItem. Consider for example the following lines: if (!(x.equals(rItem))) { super.exchange(records, median, r); } If you inspect your code a bit, you'll...

Disclaimer: The following approaches will help you reach usable speeds, but will not reduce the time complexity per se. If your computation is slow, the first thing to do is to use the profiler to find out what's slowing you down. In your case it's intern_simPrice and especially millions of calls...

haskell,functional-programming,complexity-theory,algebraic-data-types

It's because of immutable state. A list is an object + a pointer, so if we imagined a list as a Tuple it might look like this: let tupleList = ("a", ("b", ("c", []))) Now let's get the first item in this "list" with a "head" function. This head function...

What is wrong with your reasoning is the use of conflicting definitions of "time". When one says that lookup in a hash table takes O(1) time, one usually means that it takes O(1) comparisons, that is, the number of comparisons required to find an item is bounded above by a...

c++,algorithm,complexity-theory,analysis

The runtime of main() is composed of the runtime of some constant-time statements and the runtime of the i-loop: T_main(n, l) ∈ O(1) + T_fori(n, l) The i-loop runs exactly (n - 1) times and is composed of some constant-time statements and the runtime of the j-loop: T_fori(n, l) ∈...

c++,algorithm,random,complexity-theory,time-complexity

It's O(n). You don't really need to calculate the expected return value of getNextTime(). It's enough to know that its return doesn't change in response to the simulation running. Let's assume your code iterates that loop N times for a 1 hour simulation. It's pretty obvious that these are equivalent......

java,algorithm,binary-tree,complexity-theory,space-complexity

You are storing all the nodes that lie on the path from the root to a particular leaf in the List. If the binary tree is height balanced or it is a full/complete binary tree, the worst case time and space complexity is O(log n), where n is the...

algorithm,complexity-theory,combinatorics,computation-theory

This is NP-Complete problem, and is a generalization of Hitting Set Problem. Proof of NP-Completeness follows. The problem is in NP (trivial - given a solution T, it is easy to check the intersection with each of Gi, and verify if its size is Ci). It is also NP-Complete, assuming...

algorithm,time-complexity,complexity-theory

EDIT: I'll add a bit of explanation to clear up your confusion about the quote in your question. Let's consider a fixed value of i and focus on the innermost two loops: for (int j = 1; j <= i*i; j++) for (int k = 1; k <= j*j; k++)...

java,algorithm,complexity-theory,code-complexity

Start with the inner loop: for(int k = 0; k < n; k++) x++; is obviously O(n). Now one layer above that: for(int j = 0; j < n/4; j++) is O(n) because the loop runs n/4 times and we know that O(n/4) = O(n)...

complexity-theory,time-complexity,turing-machines

The deterministic Turing Machine simply emulates the non-deterministic Turing machine. Each time the NDTM takes a fork, the DTM pushes one branch and takes the other. When it has followed one possible chain for p(S) steps without reaching an accepting state, it backtracks to a previous branch point. This assumes...

algorithm,math,big-o,time-complexity,complexity-theory

The time an algorithm takes to execute is proportional to the number of steps covered (reduced here). In your linear search of the array, you have assumed that dN(t)/dt = N(t). Incorrect assumption: dN(t)/dt = N(t) gives dN(t)/N(t) = dt, so t = ln(N(t)) + d. Going by your previous...

algorithm,big-o,time-complexity,complexity-theory,exponential

Well, your method is also correct. You should proceed in the same direction; currently, I also don't have a better option. 4^n = (2^2)^n = 2^(2n) = (2^n) * (2^n) > 2^n for all values of n > 0. And (2^n) * (2^n) grows strictly faster than 2^n. This is because (2^n) * (2^n)...

algorithm,time-complexity,complexity-theory,asymptotic-complexity,big-theta

The outermost loop will run ceil(log n) times. The middle loop is dependent on the value of i, so its behaviour will be: 1st iteration of the outermost loop - 1; 2nd iteration of the outermost loop - 2; ...; ceil(log n)-th iteration of the outermost loop - ceil(log n). The innermost loop is independent of...

algorithm,complexity-theory,theory,graph-theory

I get the impression from papers like http://research.microsoft.com/pubs/144985/todsfinal.pdf that there is no algorithm that does better than O(VE) or O(V^3) in the general case. For sparse graphs and other special graphs there are faster algorithms. It seems, however, that you can still make improvements by separating "index construction" from "query",...

search,big-o,time-complexity,complexity-theory,binary-search

For binary search, the array should be arranged in ascending or descending order. In each step, the algorithm compares the search key value with the key value of the middle element of the array. If the keys match, then a matching element has been found and its index, or position,...

java,algorithm,complexity-theory

When determining complexities, we don't include constants or coefficients. Instead of O(2N + 2), it should be O(n). We only keep terms that grow with n, e.g. 2^n, n^2, log2(n), etc. Putting that aside, are you sure this is O(n)? O(n) would mean that it runs n times, but...

algorithm,big-o,time-complexity,complexity-theory

Wikipedia says :- Big O notation describes the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate...

complexity-theory,time-complexity

Correct. In general, I would say that the term 'hard' in this statement corresponds to a complexity class, not to a degree of the polynomial. Or, rather, a 'hardness' of a problem is the minimal complexity class containing this problem. That is, if A is at least as hard as...

algorithm,sorting,complexity-theory

You can determine whether some element x is in the new (shuffled) set by doing two binary searches. The key here is that the odd elements essentially act as "keys" for each other, since they have been swapped in disjoint pairs. Use a standard binary search to look for x,...

algorithm,big-o,complexity-theory,big-theta

The Master theorem doesn't even apply, so not being able to use it isn't much of a restriction. An approach which works here is to guess upper and lower bounds, and then prove these guesses by induction if the guesses are good. a(0) = 0 a(1) = 1 a(2) =...

That's a geometric progression. The first time, the inner loop is executed n times. The second time, it is executed n/2 times. Etc... So we have the sequence: n + n/2 + n/4 + ... + 1, so the final formula is: n*(1 - (1/2)^(log n))/(1/2), which is equivalent to n...
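A numeric check of that series in Python (with n a power of 2 so the halving is exact):

```python
def series_total(n):
    # n + n/2 + n/4 + ... + 1, as produced by halving the inner loop
    total, m = 0, n
    while m >= 1:
        total += m
        m //= 2
    return total

n = 1024
# The geometric series sums to 2n - 1, which is O(n).
assert series_total(n) == 2 * n - 1
```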

Overview Design complexity measures the dependency of a method on other methods; cyclomatic complexity measures the number of distinct paths through a method; and extended cyclomatic complexity basically combines the first two metrics into one. Details Design complexity This metric reports the design complexity of a method. The design complexity...

python,algorithm,complexity-theory

O(n) solution: Go from left to right. Remember the first peak; find a higher (or equally high) peak, then draw a lake between them; then remember this higher peak and repeat the process. Then do the same going right to left. It's as simple as that. (Code is...
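A common O(n) realization of this idea (a sketch, not necessarily the answerer's exact code) does the two passes up front, recording the highest peak to the left and right of each position; the water above each cell is bounded by the lower of the two:

```python
def water_volume(heights):
    # For each cell: water level = min(highest bar to the left,
    # highest bar to the right); volume = level - cell height.
    n = len(heights)
    if n == 0:
        return 0
    left = [0] * n     # highest bar at or before i (left-to-right pass)
    right = [0] * n    # highest bar at or after i (right-to-left pass)
    left[0] = heights[0]
    for i in range(1, n):
        left[i] = max(left[i - 1], heights[i])
    right[n - 1] = heights[n - 1]
    for i in range(n - 2, -1, -1):
        right[i] = max(right[i + 1], heights[i])
    return sum(min(left[i], right[i]) - heights[i] for i in range(n))

assert water_volume([2, 0, 2]) == 2
assert water_volume([3, 0, 1, 0, 3]) == 8
```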

big-o,complexity-theory,time-complexity

Each loop by itself has time complexity Θ(log₂n), as you say. Since the loops do not depend on each other, i.e. the variables i, j and k have no effect on each other, the complexities of the three nested loops can simply be multiplied, and you get Θ((log₂n)⋅(log₂n)⋅(log₂n)) =...

Yes, time complexity is O(n^3). And yes, space complexity is O(1). The only space used is constant number of indices. A way to make your algorithm more efficient and shorten the amount of comparisons is to reverse your inside for loop. Right now, you are basically checking every possible combination...

java,math,complexity-theory,mathematical-optimization,algebra

Mathematically, this corresponds to finding all linear combinations of the kernel vectors using all possible sets of coefficients that are n-tuples mod p. It amounts to a matrix multiplication mod p between a p^n x n coefficient matrix and a n x m kernel matrix. The p^n x n matrix...

c,loops,time,nested,complexity-theory

Your double loop has a time complexity of O(n), since the total number of iterations is n. Modifying the loop variable of a for loop is considered bad practice; I would use while loops instead. Note that the inner loop has a bug - it needs to check that it...

First Question: Assume you know that for the value n - 2, Proc is called T(n-2) times. Therefore, for the value n, T(n) = 1 + 2T(n-2), as there would be one call to Proc(n), which would in turn call Proc(n-2) twice. T(n) = 1 + 2T(n-2) is...

big-o,time-complexity,complexity-theory,binary-search

As always, there can be a difference of the complexity of an algorithm and its implementation. If you implement binary search in a recursive function, that takes a copy of a part of the input array, this array has to be generated in O(n), so you are right, that this...

arrays,algorithm,complexity-theory,probability-density

In a nutshell, you have a transformation F (called the discrete Fourier transform) that maps the set of vectors of size N onto itself, such that F(a*b) = F(a).F(b), where * is the convolution operator you just described and . is the pointwise (element-wise) product. Moreover, F is invertible and you...

big-o,complexity-theory,time-complexity,space-complexity

You're totally correct in simplifying O(a+b) = O(a) as per this case. It's so because a>=b (given) O(a+b) <= O(a+a) = O(2a) = O(a) // as clearly mentioned by you. Example :- Let's assume a = n; b = log(n). Then,you can see O(a+b) = O(n+log(n)) = O(n) = O(a)....

algorithm,big-o,complexity-theory,exponential

What the entry says is something like this. Suppose the algorithm is exponential with base c, so that for some input of size x, the running time is t ~= cx. Now given a processor twice as fast, with an input just a (I'm calling your constant that) larger, the...

time,complexity-theory,time-complexity

It can easily be seen that s grows quadratically in the number of iterations. s = 1 // set to 1 initially. Now we are adding: s = s + i // where i increases linearly by one unit each iteration. // So it's just an addition, i.e....
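The quadratic growth is easy to confirm in Python (a sketch of the loop as described):

```python
def final_s(n):
    # s starts at 1 and accumulates i = 1, 2, ..., n.
    s = 1
    for i in range(1, n + 1):
        s = s + i
    return s

n = 100
# s = 1 + n(n+1)/2, so after n iterations s grows as Theta(n^2).
assert final_s(n) == 1 + n * (n + 1) // 2
```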

io,complexity-theory,finite-state-machine,automaton

For a Deterministic FSM (i.e., one without epsilon state transitions), there is a unique input sequence leading to the state if and only if the following conditions are met: 1) There must exist a path to the state. (An isolated unreachable state could not qualify). 2) There is no path...

php,algorithm,big-o,complexity-theory,time-complexity

** means power. Hence, n**3 means n^3. Complexity is of the order n^3 or O(n^3)

complexity-theory,np-complete,np

From a formal languages perspective, there are only countably many problems in P and uncountably many problems not in P. Every problem in P can be solved by a deterministic, polynomial-time Turing machine, and since the number of TMs is countably infinite, the number of languages in P is countably...

algorithm,complexity-theory,time-complexity

Dot product takes m-1 additions and m multiplies. Vector normalization takes 1 vector square (dot product), 1 square root and m divisions i.e. m-1 +, m *, m /, 1 √ Subtraction of a vector projection takes 1 dot product, m multiplies and m additions, i.e. 2m-1 +, 2m *...

I think this depends on how you define MAX-3SAT. If you define MAX-3SAT as the function problem "given a 3CNF formula, produce a variable assignment maximizing the number of satisfied clauses," then it's neither NP-complete nor co-NP-complete. NP and co-NP are classes of decision problems and therefore no function problem...

complexity-theory,time-complexity

The way to solve this is to look at what the functions are actually doing. There is no general rule. You just have to switch on your brain. When you call foo (1000), how many calls foo (999) will be made at least? How many calls foo (998) will be...

java,algorithm,big-o,complexity-theory

I think the code you provided is kind of chaotic. So this post is more about the conceptual algorithm than the real algorithm. This can differ a bit since, for instance, insertion in an ArrayList<T> is not O(1), but I'm confident that you can use good data structures (for instance...

python,max,time-complexity,complexity-theory

Recursive method decreases the input size by one each call, so it's theoretically linear (since you're essentially doing a linear search for the maximum). The implementation of Python's lists is going to distort a timer-based analysis.

algorithm,big-o,complexity-theory

The inner loop iterates over the array of all primes below sqrt(i). So you have to calculate the number of elements in that array. In the case of an array of primes, you have to use approximations for π(i), the number of primes below i. You can approximate them by...

python,list,math,complexity-theory

Yes, your solution is O(N), and it's because you iterate through the list only once and each iteration is O(1). Your previous solution also iterated through the list once, but it summed all elements inside each iteration, which made each iteration O(N) too, resulting in a total complexity of O(N^2)....
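The difference can be illustrated with a running-sum sketch (the shape of the two solutions is my assumption, since the original code isn't shown):

```python
def prefix_sums_quadratic(xs):
    # O(n^2): re-sums the whole prefix on every iteration.
    return [sum(xs[:i + 1]) for i in range(len(xs))]

def prefix_sums_linear(xs):
    # O(n): carries a running total, O(1) work per element.
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

data = [3, 1, 4, 1, 5]
assert prefix_sums_linear(data) == prefix_sums_quadratic(data) == [3, 4, 8, 9, 14]
```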

algorithm,loops,complexity-theory

The easiest way is probably to count them separately. Task 3 is executed: 1+2+3+...+n = n(n+1)/2 times. Tasks 1, 2 and 4 are executed n times each. So (assuming each task takes O(1)) we have a complexity of O(n(n+1)/2 + 3n) = O(n²/2 + n/2 + 3n) = O(n²) (constant...

You don't use recursion to find the complexities, you need to find the complexities within the recursion. I think if I clarify a couple things, you'll see what you need to do. Recursion First off, recursion is simply a function that calls itself. Pretty straightforward. A really simple recursive function...