loops,for-loop,time-complexity,nested-loops,asymptotic-complexity

This question is tricky: there is a difference between what the runtime of the code is and what the return value is. The first loop's runtime is indeed O(log n), not O(log log n). I've reprinted it here:

p = 0;
for (j = n; j > 1; j = j/2)
    ++p;

On...
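To see the O(log n) runtime concretely, here is a Python sketch of that loop (the function name is mine); p counts how many times the body runs while j halves:

```python
# Python sketch of the loop above: j halves each pass, so the body
# runs about log2(n) times and p holds that count.
def count_halvings(n):
    p = 0
    j = n
    while j > 1:
        j //= 2   # integer halving, mirroring j = j / 2 in C
        p += 1
    return p

# For n = 1024, j takes the values 512, 256, ..., 1, so the body
# runs 10 times: the runtime is O(log n).
```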

algorithm,asymptotic-complexity,proof,growth-rate

Method 1: Let's take:

f(n) = n^2
g(n) = n*log n

f(n) is in Ω(g(n)) if there are c > 0 and n0 > 0 such that f(n) >= c*g(n) for every n >= n0. This means:

n*n >= c*n*log n    | divide both sides by n (valid since n >= n0 > 0)
n >= c*log n

Let's choose c=1 and...
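A numeric spot-check of that inequality (not a proof, and the function name is mine): with c = 1 and n0 = 2, n >= log2(n) holds for every tested n:

```python
import math

# Spot-check of the claim above: with c = 1 and n0 = 2,
# n^2 >= c * n * log2(n), i.e. n >= log2(n), for all n >= n0.
def omega_holds(n, c=1.0):
    return n * n >= c * n * math.log2(n)

assert all(omega_holds(n) for n in range(2, 10000))
```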

algorithm,big-o,asymptotic-complexity,growth-rate

There's a bit of confusion here: the algorithm runtime doesn't depend on n being even or odd, but on whether the numbers in A are even or odd. With that in mind, what sort of input A would make Algorithm S run faster? Slower? Also: it doesn't make sense to...

algorithm,asymptotic-complexity

The first example is O(n log n) because the outer loop repeats log(n) times, and each of those repeats requires O(n) iterations of the inner loop, so the total is O(log(n) * n) = O(n log n). However, in the 2nd example the outer loop requires O(n) iterations, and for each iteration...
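A hypothetical reconstruction of the first example (the original code is not shown here, so the loop shape is an assumption), counting the total work done:

```python
# Outer loop halves i (about log2(n) passes); inner loop always does
# n steps; total work is therefore about n * log2(n).
def total_inner_steps(n):
    steps = 0
    i = n
    while i > 1:            # runs about log2(n) times
        for _ in range(n):  # O(n) work per outer pass
            steps += 1
        i //= 2
    return steps

# For a power of two, total_inner_steps(n) == n * log2(n) exactly.
```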

algorithm,mergesort,asymptotic-complexity

To merge two lists of size M, you need at most 2M-1 comparisons; for O-purposes we can ditch the -1. For a list of length N, the dividing and merging happens on log(N) levels, and on each of those levels you have fewer than N comparisons. That makes a total number...
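A minimal merge in Python that counts its comparisons (names are mine), confirming the at-most-2M-1 bound for two runs of size M:

```python
# Merge two sorted lists, counting comparisons. Merging two runs of
# size m uses at most 2m - 1 comparisons.
def merge_count(a, b):
    merged, comparisons = [], 0
    i = j = 0
    while i < len(a) and j < len(b):
        comparisons += 1
        if a[i] <= b[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    merged.extend(a[i:])   # at most one of these extends is non-empty
    merged.extend(b[j:])
    return merged, comparisons
```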

algorithm,asymptotic-complexity,code-complexity

Yes, this is O(n^2), assuming the function executed is O(1), of course, and the iterator is also O(1) on average per iteration (which is usually a valid assumption). Note that even if you optimize it further, you are going to process Choose(n,2) elements, and Choose(n,2)=n(n-1)/2, which is still in O(n^2)....
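A quick spot-check of the Choose(n,2) = n(n-1)/2 count using the standard library:

```python
from itertools import combinations

# Even a pair-only traversal touches Choose(n, 2) = n(n-1)/2
# elements, which is still quadratic in n.
n = 100
pairs = sum(1 for _ in combinations(range(n), 2))
assert pairs == n * (n - 1) // 2   # 4950 for n = 100
```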

algorithm,asymptotic-complexity,recurrence

If you try to apply the master theorem to

T(n) = 2T(n/2) + n/log n

you consider a = 2, b = 2, which means log_b(a) = 1. Can you apply case 1, which requires 0 < c < log_b(a) = 1? Is n/log n = O(n^c)? No, because n/log n grows infinitely faster than...
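A numeric illustration (not a proof, and only for one sample exponent; the real claim is for every c < 1) of why case 1 fails: the ratio (n / log n) / n^c keeps growing, so n/log n is not O(n^c):

```python
import math

# For a sample exponent c = 0.5 < 1, (n / log n) / n^c grows without
# bound, so n / log n is not O(n^c) and case 1 cannot apply.
def ratio(n, c=0.5):
    return (n / math.log2(n)) / (n ** c)

assert ratio(2 ** 40) > ratio(2 ** 20) > ratio(2 ** 10)
```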

time-complexity,asymptotic-complexity

If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty: (n^C_2)*log(n) grows faster than (n^C_1). This is because (n^C_1) grows slower than (n^C_2) (obviously); also, for values of n larger than 2 (for log in base 2), log(n) grows faster than...

algorithm,time-complexity,complexity-theory,asymptotic-complexity

Given:

f(x) = x log x + 3 log x^2 = x log x + 6 log x    // since log(a^b) = b log a

As we know, f(x) = O(g(x)) if |f(x)| <= M * |g(x)|, where M is a positive real number. Therefore, for M >= 7 and x varying in the real positive range, M * x...

performance,algorithm,big-o,bigdata,asymptotic-complexity

If the odds of hitting a non-unique value are low, your best bet will be to select 3 random numbers from the array, then check each against the entire array to ensure it is unique - if not, choose another random sample to replace it and repeat the test. If...
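A hedged sketch of that strategy in Python (the original answer is truncated, so the re-draw policy here, replacing one colliding pick at a time, is my assumption; the RNG is seeded for reproducibility):

```python
import random

# Pick 3 random values from the array; if any collide, re-draw one
# of the colliding picks and test again.
def three_unique(arr, rng=random.Random(0)):
    picks = [rng.choice(arr) for _ in range(3)]
    while len(set(picks)) < 3:
        for i in range(3):
            if picks.count(picks[i]) > 1:   # re-draw a colliding pick
                picks[i] = rng.choice(arr)
                break
    return picks
```

With few duplicates in the array, the while loop almost never runs, so the expected cost stays low.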

.net,string,asymptotic-complexity

.NET strings do not treat non-BMP code points as single characters; the string APIs will return a surrogate pair as two characters. Therefore, as far as string is concerned, UTF-16 is not a variable-length encoding. If you want to see UTF-32 code points, use Char.ConvertToUtf32(); walking the whole string code point by code point is O(n)....
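A Python analogue of the same point (Python strings index by code point, but encoding to UTF-16 exposes the surrogate pair that .NET's char-based APIs would report as two characters):

```python
# A single astral (non-BMP) code point occupies two UTF-16 code
# units (a surrogate pair), which is what a char-based string API
# sees as two characters.
s = "\U0001D11E"                     # MUSICAL SYMBOL G CLEF, U+1D11E
assert len(s) == 1                   # one code point in Python
utf16_units = len(s.encode("utf-16-le")) // 2
assert utf16_units == 2              # two UTF-16 code units
```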

loops,for-loop,big-o,time-complexity,asymptotic-complexity

"If c=1 in loop 1 and loop 2 then it will run infinite times, right? But it is given as logarithmic time. Why?" Yes, you are right: if c = 1 then we will get infinite loops for both case 1 and case 2, so the condition "c is...
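A small sketch of the multiplicative loop in question (the loop body and names are my reconstruction): with c > 1 the index grows geometrically, giving about log_c(n) iterations; with c = 1 the index never changes and the loop never ends.

```python
# With a multiplier c > 1 the index grows geometrically, so the loop
# runs about log_c(n) times; with c = 1 it would never terminate.
def count_iterations(n, c):
    assert c > 1, "c must exceed 1, or the loop never terminates"
    count, i = 0, 1
    while i < n:
        i *= c
        count += 1
    return count
```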

python,python-3.x,set,big-o,asymptotic-complexity

I don't understand your second option, but iterating a list is O(n), and you must iterate the list to convert it to a set. Any operation on each element - e.g. hashing - is a constant factor that is absorbed into the linear time of the iteration.

That is not n^2 combinations. Big-O notation is about the most significant factor, and the number of combinations, excluding duplicates (e.g. (10, 10)), is still proportional to n^2. In fact the total number of combinations without the duplications is n^2 - n. In big-O you only keep the most significant term,...

algorithm,recursion,complexity-theory,asymptotic-complexity

If the depth of the stack (recursion) is constant and does not change with respect to the size of the input, then a recursive solution can be O(1) extra space. Some compilers may do tail call optimization (TCO) and remove recursive calls if they are the last statement executed in...
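An illustration of the idea (note that CPython does not perform tail call optimization, so in Python the O(1)-space form has to be written as a loop by hand; function names are mine):

```python
# A tail-recursive sum and its loop equivalent: same computation, but
# the loop uses O(1) extra space instead of O(n) stack frames.
def sum_to_recursive(n, acc=0):
    if n == 0:
        return acc
    return sum_to_recursive(n - 1, acc + n)   # tail call

def sum_to_iterative(n):
    acc = 0
    while n > 0:        # what TCO would turn the recursion into
        acc, n = acc + n, n - 1
    return acc
```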

c,algorithm,stack,asymptotic-complexity

Pseudo-code copied here for convenience:

1. Push the first element to the stack.
2. Pick the rest of the elements one by one and follow these steps in a loop:
   - Mark the current element as next.
   - If the stack is not empty, pop an element from the stack and compare it with next.
   - If next is...
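That pseudo-code is the standard next-greater-element pattern; here is a Python rendering of it (assuming distinct values, and using -1 when no greater element exists; both assumptions, since the original is truncated):

```python
# Next greater element via a stack: pop while the stack top is
# smaller than the current element. Each element is pushed and
# popped at most once, so the whole pass is O(n).
def next_greater(arr):
    result = {}
    stack = [arr[0]]                 # push the first element
    for nxt in arr[1:]:
        while stack and stack[-1] < nxt:
            result[stack.pop()] = nxt
        stack.append(nxt)
    for leftover in stack:           # nothing greater to their right
        result[leftover] = -1
    return result
```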

algorithm,recursion,time-complexity,asymptotic-complexity

Number 1: I can't tell how you've derived your formula. Usually, adding terms happens when there are multiple steps in an algorithm, such as precomputing data and then looking up values from the data. Nested for loops, by contrast, imply multiplication. Also, the worst case is the best case for this...

algorithm,big-o,asymptotic-complexity

"Which is the correct method? The Big O value is the same. But if someone asks to give a step count for the algorithm, how do I answer?" It's a matter of convention: some people count additions and multiplications together, others just multiplications, because they would typically be more demanding operations than...

if-statement,for-loop,time-complexity,asymptotic-complexity

We can start by looking at the easy case, case 2. Clearly, each time we go through the loop in case 2, one of two things happens: count is incremented (which takes O(1) [except not really, but we just say it does for computers that operate on fixed-length numbers...

performance,algorithm,recursion,asymptotic-complexity

T(n) = O(n), because the logarithm of 4 to base 4 is 1, and 3 * log(n) is O(n^0.5) (0.5 < 1). This corresponds to the first case of the Master theorem as described here.

algorithm,time-complexity,complexity-theory,asymptotic-complexity,big-theta

The outermost loop will run ceil(log n) times. The middle loop is dependent on the value of i, so its behaviour will be:

1st iteration of outermost loop - 1
2nd iteration of outermost loop - 2
.....................................
ceil(log n)-th iteration of outermost loop - ceil(log n)

The innermost loop is independent of...

data-structures,asymptotic-complexity,space-complexity,skip-lists

If you don't set an upper bound on the height of the skip list, then there is no upper-bound to the worst-case space usage. For any bound you could place, there's some horribly unlucky and astronomically improbable execution of the skiplist that would result in a number of layers so...

algorithm,sorting,big-o,computer-science,asymptotic-complexity

The reason that comparisons are such an important factor is that, worst comes to worst, you could always store things by pointer and make swaps cost the same for any type of key. These simple systems tricks will not work for comparisons. Comparing strings in lexicographic order is inherently more expensive...

arrays,algorithm,sorting,big-o,asymptotic-complexity

The problem, as I understand it, is that we want to find the sums

a1 + b1   a1 + b2   ...   a1 + bn
a2 + b1   a2 + b2   ...   a2 + bn
...       ...       ...   ...
an + b1   an + b2   ...   an + bn

and print...
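For reference, the brute-force version of that table is a straightforward double loop producing all n^2 sums (the function name is mine):

```python
# Emit every sum a[i] + b[j]; there are len(a) * len(b) of them,
# i.e. n^2 when both lists have length n.
def all_pair_sums(a, b):
    return [x + y for x in a for y in b]
```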

There's a methodology using Sigma notation that is precise enough: ...

algorithm,asymptotic-complexity

You're talking about the bogosort algorithm. To answer your questions: What is a best case for GoofySort? In the best case, the array is already sorted, or the first randomisation sorts it. What is the running time in the best case? Either the time it takes to check that it's sorted,...
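A minimal bogosort sketch (the pattern GoofySort apparently follows; the RNG is seeded here only for reproducibility): shuffle until sorted, so the best case is when the initial sortedness check succeeds immediately.

```python
import random

# Bogosort: shuffle until sorted. Best case: the input is already
# sorted and no shuffle ever happens (one O(n) check).
def bogosort(arr, rng=random.Random(0)):
    def is_sorted(a):
        return all(a[i] <= a[i + 1] for i in range(len(a) - 1))
    while not is_sorted(arr):
        rng.shuffle(arr)
    return arr
```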

algorithm,sorting,asymptotic-complexity,radix

Assuming we use bucket sort for the sorting on each digit: for each digit (d), we process all numbers (n), placing them in buckets for all possible values a digit may have (b). We then need to process all the buckets, recreating the original list. Placing all items in the...
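The analysis above can be sketched as a digit-by-digit (LSD) radix sort with b = 10 buckets (an illustrative choice; the answer leaves b general), doing d passes of O(n + b) each:

```python
# LSD radix sort: d passes, each placing all n numbers into b
# buckets by one digit and concatenating the buckets back,
# for O(d * (n + b)) total. Assumes non-negative integers.
def radix_sort(nums, base=10):
    digits = len(str(max(nums)))          # d passes
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // base ** d) % base].append(x)
        nums = [x for bucket in buckets for x in bucket]
    return nums
```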

python,algorithm,asymptotic-complexity

Assuming this function indeed generates all possible combinations of length k, its time complexity is O(n!/[(n-k)!k!] * k^2). There are exactly n!/[(n-k)!k!] k-combinations, and we generate each of them. Let's look at the generation of each one. It is done by creating a tuple iteratively: first the 1st...

algorithm,asymptotic-complexity,proof,growth-rate

You're not thinking of this at a large enough scale: if you have a variable n multiplied by a constant k, then as n grows, the effect of k is minimised. It becomes a limit problem, from grade school calculus....

The implementation of vector uses internal GHC functions called primops. You can find them in the package ghc-prim, which is hard-wired into GHC. It provides, among others, the following array functions:

newArray# :: Int# -> a -> State# s -> (# State# s, MutableArray# s a #)
readArray# :: MutableArray# s a...

No. The difference between bases is a constant factor, and constants are not considered in asymptotic efficiency. In this case:

f(n) = O(g(n)) = O(lg(n))

In fact:

f(n) = Θ(g(n)) = Θ(lg(n))

...

complexity-theory,time-complexity,asymptotic-complexity,amortized-analysis

Yes, it can. Amortized complexity takes into account the frequency with which the worst case appears. Thus, as long as the worst case appears only about once in N^2 operations, the amortized complexity will be constant. Let's take a simple example - the dynamically expanding array (I will call that vector...
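A toy doubling vector in Python that counts element copies (class and field names are mine): a single push is O(n) in the rare reallocating case, but n pushes do only O(n) total copy work, so each push is O(1) amortized.

```python
# Doubling vector: a push that triggers a regrow copies all current
# elements, but regrows happen only at powers of two, so total copy
# work over n pushes is 1 + 2 + 4 + ... < 2n.
class Vector:
    def __init__(self):
        self.cap, self.size, self.copies = 1, 0, 0
        self.data = [None] * self.cap

    def push_back(self, x):
        if self.size == self.cap:          # rare worst case: regrow
            self.cap *= 2
            new = [None] * self.cap
            new[:self.size] = self.data
            self.data = new
            self.copies += self.size       # count element moves
        self.data[self.size] = x
        self.size += 1

v = Vector()
for i in range(1000):
    v.push_back(i)
assert v.copies < 2 * 1000   # linear total work: O(1) amortized per push
```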

algorithm,big-o,asymptotic-complexity

Ali Amiri's intuition is correct, but it's not a formal argument. Really there needs to be a base case like T(n) = 1 for all 0 ≤ n < 9 and then we can write 1/2 n ≤ n/3 for all n ≥ 9 and then guess and check a...

math,big-o,time-complexity,asymptotic-complexity

I don't believe that the ordering you've given here is correct. Here are a few things to think about: Notice that 2^(log_4 n) = 2^(log_2 n / log_2 4) = 2^((log_2 n) / 2). Can you simplify this expression? How fast does the function e^(π4096) grow as a function of...

algorithm,asymptotic-complexity,proof,growth-rate

Your statement is not true; consider the following counter-example: take f(n) = 2n^2 = O(n^2) and g(n) = n^2 = O(n^2). We have: (f-g)(n) = n^2, which is definitely not a constant, and hence (f-g)(n) ≠ O(1)....

algorithm,for-loop,big-o,time-complexity,asymptotic-complexity

Well, you can proceed formally as such: ...

algorithm,asymptotic-complexity

Assuming arr and brr are NOT hard-coded, but arguments, the complexity of the program is O(n*m), where n is the size of arr and m is the size of brr. If they are of approximately (asymptotically) the same length, the complexity is O(n^2). You can arrive...

algorithm,big-o,time-complexity,complexity-theory,asymptotic-complexity

Sometimes you can simplify calculations by calculating the amount of time per item in the result rather than solving recurrence relations. That trick applies here. Start by changing the code to this obviously equivalent form:

private static IntTreeNode createBST(int[] array, int left, int right) {
    int middle = array[(left +...
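A Python rendering of the balanced-BST construction that answer analyzes (class and function names are mine): pick the middle of the range as the root and recurse on each half. Each array element becomes exactly one node with O(1) work per node, which is the per-item argument for O(n) total.

```python
# Build a balanced BST from a sorted array: the middle element of
# the current range is the root; recurse on the two halves.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def create_bst(array, left, right):
    if left > right:
        return None
    middle = (left + right) // 2
    return Node(array[middle],
                create_bst(array, left, middle - 1),
                create_bst(array, middle + 1, right))

def inorder(node):
    return inorder(node.left) + [node.value] + inorder(node.right) if node else []
```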

java,arraylist,hashmap,time-complexity,asymptotic-complexity

Firstly, an operation of complexity O(1) does not always take less time than an operation of complexity O(n). O(1) only means that the operation takes constant time (which could be any value), regardless of the size of the input. O(n) means that the time required for the operation increases...

algorithm,time-complexity,analysis,pseudocode,asymptotic-complexity

To me this maps to the function g(n)=n^2, since the loop for i=1 to n runs n times if you count all values in the range [1,n]. Of course, if it is [1,n) then you have g(n)=(n-1)^2, but that's a matter of convention. Two nested loops, each running n times, give you O(n^2) complexity...

arrays,algorithm,sorting,asymptotic-complexity

This logic is certainly correct, but it is not fast enough to beat O(n) because you check every element. You can do it faster by observing that if A[i]==i, then all elements at j < i are at their proper places. This observation should be sufficient to construct a divide-and-conquer...
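One way to realize the divide-and-conquer that answer hints at (assuming, as the observation requires, a sorted array of distinct integers; the function name is mine): if A[mid] < mid, a fixed point can only lie to the right, so a binary search finds it in O(log n).

```python
# Binary search for an index with a[i] == i in a sorted array of
# distinct integers; returns -1 if no such index exists.
def find_fixed_point(a):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == mid:
            return mid
        if a[mid] < mid:
            lo = mid + 1   # fixed point, if any, is to the right
        else:
            hi = mid - 1   # fixed point, if any, is to the left
    return -1
```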