I wrote this Python code to do a particular computation in a bigger project, and it works fine for smaller values of `N`, but it doesn't scale well for large values. Even though I ran it for a number of hours to collect the data, I was wondering if there is a way to speed it up.

```
import numpy as np

def FillArray(arr):
    # Keep drawing random indices until no zeros are left
    while 0 in arr:
        ind1 = np.random.randint(0, N)
        if arr[ind1] == 0:
            if ind1 == 0:               # left edge: only one neighbor to block
                arr[ind1] = 1
                arr[ind1 + 1] = 2
            elif ind1 == len(arr) - 1:  # right edge: only one neighbor to block
                arr[ind1] = 1
                arr[ind1 - 1] = 2
            else:                       # interior: block both neighbors
                arr[ind1] = 1
                arr[ind1 + 1] = 2
                arr[ind1 - 1] = 2
        else:
            continue
    return arr

N = 50000
dist = []
for i in range(1000):
    arr = [0 for x in range(N)]
    dist.append(FillArray(arr).count(2))
```

For `N = 50,000`, it currently takes slightly over a minute on my computer for one iteration to fill the array. So if I want to simulate this, let's say, 1000 times, it takes many hours. Is there something I can do to speed this up?
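To make the scaling easier to see, here is a reduced, self-contained version of the same fill rule (names `fill_array` and `n` are just for this sketch) timed at a smaller size with `time.perf_counter`:

```
import time
import numpy as np

def fill_array(arr, n):
    # Same rule as in the question: pick a random index, place a 1,
    # and mark its neighbors with 2 so they can't be filled later.
    while 0 in arr:
        i = np.random.randint(0, n)
        if arr[i] == 0:
            arr[i] = 1
            if i > 0:
                arr[i - 1] = 2
            if i < n - 1:
                arr[i + 1] = 2
    return arr

n = 5000  # smaller than the N = 50,000 in the question, so this finishes quickly
start = time.perf_counter()
fill_array([0] * n, n)
print(f"one fill took {time.perf_counter() - start:.2f} s")
```

Because `0 in arr` scans the whole list on every draw, the run time grows much faster than linearly in `n`, which matches the slowdown described above.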

Edit 1: I forgot to mention what it actually does. I have a list of length `N`, which I initialize with zeros in each entry. Then I pick a random index between `0` and `N-1`, and if that entry of the list is zero, I replace it with `1` and its neighboring entries with `2`, to indicate that they are not filled with `1` but can't be filled again. I keep doing this until the whole list is populated with `1`s and `2`s, and then I count how many of the entries contain `2`, which is the result of this computation. In other words, I want to find out how many entries are left unfilled when I fill an array randomly under this constraint.
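To illustrate the rule on a tiny array, here is a minimal sketch of a single fill step (the helper name `fill_step` is made up for this example and is not part of the code above):

```
def fill_step(arr, ind):
    # Apply the fill rule at index ind: place a 1 and block both neighbors with 2.
    # Returns True if the step did anything, False if the entry was already taken.
    if arr[ind] != 0:
        return False
    arr[ind] = 1
    if ind > 0:
        arr[ind - 1] = 2
    if ind < len(arr) - 1:
        arr[ind + 1] = 2
    return True

arr = [0] * 7
fill_step(arr, 3)   # arr becomes [0, 0, 2, 1, 2, 0, 0]
fill_step(arr, 2)   # index 2 is blocked by a 2, so nothing changes
print(arr)
```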

Obviously I do not claim that this is the most efficient way to find this number, so I am hoping that there is a better alternative approach if this code can't be sped up.