What's the big-O notation of this code?

```
for (int i = 1; i < 2 * n; i++)
    x = x + 1;
```

My answer = `O(2*n)`

Is this correct?

Tag: big-o,time-complexity

It is O(n). Big O describes how the running time grows as the input grows, and constant factors are dropped: the loop runs about 2n times, which is linear, so O(2*n) simplifies to O(n).
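To see the linear growth concretely, here is a small sketch (a hypothetical `steps` helper, not from the question) that counts how often the loop body runs:

```python
def steps(n):
    """Count how many times the loop body executes for a given n."""
    count = 0
    i = 1
    while i < 2 * n:  # mirrors: for (int i = 1; i < 2*n; i++)
        count += 1
        i += 1
    return count

print(steps(10))   # 19
print(steps(100))  # 199
```

The count is 2n - 1: a constant multiple of n, which is exactly why the constant is dropped and the loop is O(n).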

hash,hashtable,big-o,time-complexity

When talking about hashing, we usually measure the performance of a hash table by talking about the expected number of probes that we need to make when searching for an element in the table. In most hashing setups, we can prove that the expected number of probes is O(1). Usually,...
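To make the O(1) expectation concrete, here is a toy chaining sketch (the `chain_lengths` helper is hypothetical, not from the answer): the average chain length equals the load factor n/m, which stays constant when the table is sized proportionally to the data.

```python
def chain_lengths(keys, n_buckets):
    """Hash each key into a bucket; return how long each chain gets."""
    buckets = [0] * n_buckets
    for k in keys:
        buckets[hash(k) % n_buckets] += 1
    return buckets

# 10,000 keys over 1,000 buckets -> average chain length n/m = 10
lengths = chain_lengths(range(10_000), 1_000)
print(sum(lengths) / len(lengths))  # 10.0
```

A search then costs one hash plus a scan of one short chain, which is where the O(1) expected-probe bound comes from.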

time-complexity,recurrence-relation

T(1) = O(1), T(n) = T(n-1) + O(1). This is indeed the signature of a sequential search, and it has O(n) time complexity: the recurrence unrolls into n levels, each contributing O(1) work, so the terms sum to n · O(1) = O(n). The solution is straightforward: say we know T(n) = T(n-1) + O(1), and we...
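A direct way to see the recurrence in code (my sketch, assuming plain sequential search):

```python
def seq_search(xs, target, i=0):
    """Recursive sequential search: one O(1) comparison, then recurse
    on the rest, i.e. T(n) = T(n-1) + O(1), with T(0) = O(1)."""
    if i == len(xs):
        return -1
    if xs[i] == target:
        return i
    return seq_search(xs, target, i + 1)

print(seq_search([4, 8, 15, 16, 23, 42], 16))  # 3
```

The recursion can go n levels deep, each level doing constant work, hence O(n) overall.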

lowerKey() is a search in a balanced binary search tree, so it's obviously O(log n). You might want to read the source code, e.g. from here, to see how the tree is traversed. Similarly, each operation with a NavigableMap returned from subMap() also requires O(log n) because you will need...

java,arrays,algorithm,time-complexity

You can easily do this in O(n * log n) by sorting: int[] input = new int[]{2, 6, 7, 3, 9}; Integer[] indices = new Integer[input.length]; for (int i = 0; i < indices.length; i++) { indices[i] = i; } // find permutation of indices that...
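The same index-sorting idea in Python, for comparison (my transliteration, not part of the original answer):

```python
input_vals = [2, 6, 7, 3, 9]
# permutation of indices that sorts the values -- O(n log n) for the sort
indices = sorted(range(len(input_vals)), key=lambda i: input_vals[i])
print(indices)  # [0, 3, 1, 2, 4]
```

Walking `indices` now visits the values in ascending order without moving the original array.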

math,big-o,time-complexity,asymptotic-complexity

I don't believe that the ordering you've given here is correct. Here are a few things to think about: Notice that 2^(log_4 n) = 2^(log_2 n / log_2 4) = 2^((log_2 n) / 2). Can you simplify this expression? How fast does the function e^(π·4096) grow as a function of...
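Carrying that first hint one step further with standard log identities (my worked derivation, not part of the original answer):

```latex
2^{\log_4 n} = 2^{\frac{\log_2 n}{\log_2 4}} = 2^{\frac{\log_2 n}{2}}
             = \left(2^{\log_2 n}\right)^{1/2} = \sqrt{n}
```

So that term is just √n in disguise, which makes its place in the ordering easier to see.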

Big O, Theta, and Omega notation all refer to how a solution scales asymptotically as the size of the problem tends to infinity; however, they should really be prefaced with what you are measuring. Usually when one talks about big O(n) one means that the worst-case complexity is O(n),...

algorithm,graph,tree,runtime,big-o

I don't know if your algorithm is correct, but it doesn't seem to be O(|V|), at least because getting the "smallest edge not in T" of a vertex cannot be done in O(1). The most usual way to add an edge e = (u, v) into an MST T is: run a BFS in T...

python,time-complexity,space-complexity

Without further information on the problem, your actual task, and your solving attempts, an answer can at best be approximate... but I will try to at least give you some input. a = [random.randint(1,100) for i in xrange(1000000)] A statement like a = ... is normally considered to be O(1) in terms...

In set theory notation |A| is the cardinality of set A, in other words the number of elements contained in set A. For Reference: http://www.mathsisfun.com/sets/symbols.html...

algorithm,big-o,time-complexity,complexity-theory

Wikipedia says: Big O notation describes the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate...

The worst-case running time will be infinite if you keep getting i, j such that vals[i-1][j-1] == 0, so the while loop will never terminate. This situation almost surely does not happen, though. Let T denote the expected running time of rand7(); we have the following observation: if i...

arrays,algorithm,random,insert,time-complexity

First consider the inner loop. When do we expect to have our first success (find an open position) when there are i values already in the array? For this we use the geometric distribution: Pr(X = k) = (1-p)^{k-1} p Where p is the probability of success for an attempt....
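The punchline of the geometric distribution is that its mean is 1/p; with i of n slots taken, p = (n - i)/n, so the expected number of attempts is n/(n - i). A tiny sketch (the `expected_attempts` helper is mine, not from the answer):

```python
from fractions import Fraction

def expected_attempts(n, i):
    """Expected probes to hit an open slot when i of n slots are occupied:
    geometric distribution with success probability p = (n - i)/n, mean 1/p."""
    p = Fraction(n - i, n)
    return 1 / p

print(expected_attempts(10, 0))  # 1: every slot is open
print(expected_attempts(10, 9))  # 10: one open slot left
```

Summing this over i = 0..n-1 gives n·H(n), the familiar O(n log n) total for filling the whole array this way.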

algorithm,big-o,complexity-theory,exponential

What the entry says is something like this. Suppose the algorithm is exponential with base c, so that for some input of size x, the running time is t ≈ c^x. Now, given a processor twice as fast, with an input just a (I'm calling your constant that) larger, the...
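In symbols: solving 2·c^x = c^(x+a) gives a = log_c 2, a constant. A quick check (hypothetical helper, not from the answer):

```python
import math

def extra_input(c):
    """How much larger an input a twice-as-fast machine can handle when
    the runtime is c**x: solve 2 * c**x == c**(x + a) for a."""
    return math.log(2, c)

print(extra_input(2))  # 1.0 -> doubling the speed buys one more input unit
```

Note that a depends only on the base c, not on x, which is the point the entry is making.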

algorithm,recursion,big-o,complexity-theory,recurrence

It looks like the lower bound is pretty good, so I tried to prove that the upper bound is O(log n / log log n). But let me first explain the other bounds (just for a better understanding). TL;DR: T(n) is in Θ(log n / log log n). T(n) is...

arrays,data-structures,time-complexity

Here's an implementation that still has O(n) runtime for insert and delete, but which gets lookups running in time O(log n). Have as your data structure a dynamically-allocated, sorted array with no slack space. To perform a lookup, just use binary search in time O(log n). To do an insertion,...
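A minimal sketch of that structure using Python's bisect module (my illustration; the answer does not specify a language):

```python
import bisect

arr = []                    # sorted, no slack space
for x in [5, 1, 4, 2, 3]:
    bisect.insort(arr, x)   # binary search for the spot, then O(n) shifting

def lookup(a, x):
    """Binary search in O(log n); returns the index or -1."""
    i = bisect.bisect_left(a, x)
    return i if i < len(a) and a[i] == x else -1

print(arr)             # [1, 2, 3, 4, 5]
print(lookup(arr, 4))  # 3
```

Insertions and deletions stay O(n) because of the element shifting, but every lookup is a pure binary search.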

time-complexity,asymptotic-complexity

If we consider C_1 and C_2 such that C_1 < C_2, then we can say the following with certainty: (n^C_2)*log(n) grows faster than (n^C_1). This is because (n^C_1) grows slower than (n^C_2) (obviously); also, for values of n larger than 2 (for log in base 2), log(n) grows faster than...

You are not inserting, right? You are assigning directly to a specific address, and you don't need to figure out the right position beforehand. That means you don't need any loop and don't need to do any computation before finding the position and assigning, and the memory...

O(65536 n^2 + 128 n log_2 n) is the same as O(n^2 + n log_2 n), since you can ignore multiplicative constants. O(n^2 + n log_2 n) is equal to O(n^2), since n^2 grows faster than n log_2 n. Also, by the way, the base of logarithms doesn't matter in big-O analysis. All logarithms...

algorithm,big-o,time-complexity,lower-bound

Yes, that's correct. One way to see this is via an adversarial argument. Suppose that you have an algorithm that allegedly finds the maximum value in the array, but doesn't inspect every array element at least once. Suppose I run your algorithm on some array A1 that consists of nothing...
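The matching upper bound is a plain linear scan, which inspects every element exactly once and uses n - 1 comparisons (my sketch, with a hypothetical comparison counter):

```python
def max_with_count(a):
    """Linear scan for the maximum, counting comparisons along the way."""
    best, comparisons = a[0], 0
    for x in a[1:]:
        comparisons += 1
        if x > best:
            best = x
    return best, comparisons

print(max_with_count([3, 1, 4, 1, 5, 9, 2, 6]))  # (9, 7)
```

Together with the adversarial lower bound, this pins the problem at Θ(n).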

java,collections,time-complexity

Some implementations of LinkedList can keep a count of the number of elements in the list, as well as a pointer to the last element for quick tail insertion. This means those operations are done in O(1), as each is a quick data access. Similar reasoning can be followed for implementations...

time-complexity,complexity-theory

Don't have enough reputation points, hence posting as answer. Perhaps your use of factor in two different senses is the source of the confusion. Time is but one factor out of many possible complexity factors, such as storage, bandwidth, etc. Exponential factors in the case of polynomial algorithms refer to...

Your answer is correct. As for what you need to prove or not, that really depends on the context. If this is for an algorithms paper, you can assume that everyone reading your paper would know this. If this is for a class, the best answer would be to check...

if-statement,for-loop,time-complexity,asymptotic-complexity

We can start by looking at the easy case, case 2. Clearly, each time we go through the loop in case 2, one of either 2 things happens: count is incremented (which takes O(1) [except not really, but we just say it does for computers that operate on fixed-length numbers...

python,algorithm,performance,big-o

The slowest thing you do is copying a part of the list: current = num[lower:upper]. This brings the complexity of this step up from O(n) to O(n•k). What you really should do is just take the elements, which define the unfairness of this sublist directly by their indexes: min_unfairness =...

java,algorithm,performance,substring,time-complexity

Time complexity is O(n). Each insertion (append(x)) to a StringBuilder is done in O(|x|), where |x| is the size of the input string you are appending (independent of the state of the builder, on average). Your algorithm iterates over the entire string and uses String#substring() for each word in it. Since...

c++,c,arrays,linked-list,big-o

A 2-dimensional array called a[10][10] exists; when I insert a[5][5]=1, is it O(1)? Nope, it is O(row index + column index), because you need to find one linked list by traversal, then the node in it by another traversal. A linked list exists with N nodes; then, in order...

c++,algorithm,inheritance,time-complexity

The first is supposedly in O(M*logN) time, where M is the size of the list, and N = number of concrete derived classes of Base. It's not, though. unordered_map is a hash table; lookup and insertion have constant complexity on average. So the first is still O(M), just with more...

Well the important thing to understand is how you're counting the complexity. Let's say you have N entries in the first container and the std::set<int> contains M elements. With a container that allows fast search, the first part is O(1) then printing is O(M). However, if you were using something...

loops,for-loop,big-o,time-complexity,asymptotic-complexity

If c=1 in loop 1 and loop 2, then it will run infinite times, right? But it is given as logarithmic time; why? Yes, you are right: if c = 1 then we will get infinite loops for both case 1 and case 2, so the condition "c is...

algorithm,sorting,big-o,time-complexity,complexity-theory

The outer for-loop would run n times. But it also holds an inner for-loop which is dependent on the value of j. I think that the first for-loop test will execute n times, and the nested for-loop will execute 1 + 2 + ... + n = n(n+1)/2 times....
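Counting the inner-loop executions directly confirms the arithmetic-series total (a sketch, assuming the inner loop runs j times on the outer loop's j-th pass):

```python
def inner_iterations(n):
    """Total number of inner-loop body executions."""
    count = 0
    for j in range(1, n + 1):  # outer loop: n passes
        for _ in range(j):     # inner loop: j passes, dependent on j
            count += 1
    return count

n = 100
print(inner_iterations(n))  # 5050
print(n * (n + 1) // 2)     # 5050, the closed form
```

Since n(n+1)/2 is dominated by its n² term, the nested pair is O(n²).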

python,algorithm,time-complexity,longest-substring

It's O(N). 'Why use DP of O(N^2)?': you don't need to for this problem. Note, though, that you take advantage of the fact that your sequence tokens (letters) are finite, so you can set up a list to hold all the possible starting values (26) and...
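One common O(N) approach is a sliding window over last-seen positions (my sketch; not necessarily the exact solution the answer describes):

```python
def longest_unique_substring(s):
    """O(N) sliding window: `start` only ever moves forward."""
    last_seen = {}           # character -> most recent index
    start = best = 0
    for i, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1  # jump past the repeated character
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best

print(longest_unique_substring("abcabcbb"))  # 3 ("abc")
```

Each index enters and leaves the window at most once, which is where the linear bound comes from.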

algorithm,graph,tree,runtime,big-o

Since everything is still connected and only one edge has been removed, then most (and maybe all) of the spanning tree remains the same. Attempt to construct the same minimum spanning tree, and if the edge that was removed was part of the spanning tree, grab the next smallest edge...

algorithm,time-complexity,convex-hull

Well, you do have linear complexity because: for (i = 1 ... n) contributes an n factor to the complexity, so up to now O(n). In the nested while loop you have the condition (L size >= 2 && it will also check whether you make a counter-clockwise turn (that should be done...

In your example, if num is prime then it would take exactly num - 1 steps. This would mean that the algorithm's runtime is O(num) (where O denotes the pessimistic, worst case). But in the case of algorithms that operate on numbers, things get a little bit more tricky (thanks for...

python,list,dictionary,set,time-complexity

nested[1] and nested[2] have no effect whatsoever on the time taken to perform operations on nested[0]. An object has no knowledge of what containers it might be referenced in or what other objects might be in the container with it. List operations take the same amount of time regardless...

It just gets added. See, for example: for(i=0;i<n;i++) //statements for(i=0;i<m;i++) //statements So the total complexity is O(m+n). Let's say m = 3n; then it's O(4n), which is O(n) only. Let m = n^2; then it's O(n^2 + n), which is O(n^2)...

algorithm,math,recursion,time-complexity

As all the commenters said, I need to use the Akra-Bazzi theorem. C(1) = 1, C(2) = 1. For N > 2 we need to first find p from the following equation: (1/3)^p + (2/3)^p = 1. It is obvious that p = 1. Next we need to solve...

java,algorithm,time-complexity

Complexity is actually O(n^3), ignoring JIT optimizations (and most likely online judges turn those off). Since 5000^3 ≈ 1.2 * 10^11, getting TLE is expected. Explanation of the time complexity: look at your code, and pay special attention to the comment I added: for(int i=0;i<n;i++){ String temp=""; for(int j=i;j<n;j++){ temp+=S.charAt(j); // ^^...

algorithm,math,time-complexity,computer-science,recurrence-relation

Here are a few hints : define R(n) = T(n)/(n-1)! solve the recurrence for R(n) express T(n) as a function of R(n) ...

loops,for-loop,time-complexity,nested-loops,asymptotic-complexity

This question is tricky - there is a difference between what the runtime of the code is and what the return value is. The first loop's runtime is indeed O(log n), not O(log log n). I've reprinted it here: p = 0; for (j=n; j > 1; j=j/2) ++p; On...
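Counting the halvings directly shows the O(log n) bound on that first loop (my transliteration of the loop, with a counter added):

```python
def halving_steps(n):
    """Count iterations of: for (j = n; j > 1; j = j / 2) ++p;"""
    p = 0
    j = n
    while j > 1:
        j //= 2   # integer division, as in the C-style loop
        p += 1
    return p

print(halving_steps(16))    # 4  == log2(16)
print(halving_steps(1024))  # 10 == log2(1024)
```

For n that is not a power of two, the count is floor(log2 n), which is still Θ(log n).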

c++,time-complexity,sparse-matrix

nnz: number of non-zero entries of the sparse matrix; row_size: number of matrix rows; column_size: number of matrix columns. There are many ways; their space complexity: Compressed Sparse Row (CSR): 2*nnz + row_size units of memory; Compressed Sparse Column (CSC): 2*nnz + column_size units of memory; Coordinate Format (COO)...
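A toy CSR conversion makes the 2*nnz + row_size count concrete (my sketch; note the row-pointer array actually has row_size + 1 entries):

```python
def to_csr(dense):
    """Convert a dense matrix to CSR: values, column indices, row pointers."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # prefix count of non-zeros per row
    return values, col_idx, row_ptr

m = [[5, 0, 0],
     [0, 0, 7],
     [0, 3, 0]]
values, col_idx, row_ptr = to_csr(m)
print(values)   # [5, 7, 3]
print(col_idx)  # [0, 2, 1]
print(row_ptr)  # [0, 1, 2, 3]
```

Storage is len(values) + len(col_idx) + len(row_ptr) = nnz + nnz + (row_size + 1), i.e. roughly 2*nnz + row_size.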

Since the O() behavior is asymptotic, you sometimes cannot see the behavior for small values of N. For example, if I set numberOfSizes = 9 and discard the first 3 points for the polynomial fit, the slope is much closer to 3: randomMPolyfit = polyfit(log10(sizesArray(4:end)), log10(randomMAveragesArray(4:end)), 1); randomMSlope = randomMPolyfit(1)...

This will not be a very rigorous analysis, but the problem seems to be with the BasicTransformer's transform(Seq[Node]) method [1]. The child transform method will be called twice for a node which is changed. Specifically, in your example all the nodes will be called twice for this reason. And you are...