python,iteration,frequency,choice,population

Your issue is that you're not resetting p_freq = 0.1 and q_freq = 1 - p_freq after each simulation. You need to reset them inside your for sim in range(n): loop; otherwise they retain the values from the last simulation.
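The fix above can be sketched as follows; this is a minimal toy drift simulation, so the function name, generation loop, and resampling step are all illustrative assumptions rather than the asker's actual code:

```python
import random

def run_simulations(n, generations=50, pop=100):
    """Toy simulation; names and parameters are illustrative."""
    final_freqs = []
    for sim in range(n):
        # Reset at the START of every simulation; otherwise p_freq/q_freq
        # carry over their final values from the previous run.
        p_freq = 0.1
        q_freq = 1 - p_freq
        for gen in range(generations):
            # resample p_freq with a binomial-style draw of size pop
            p_freq = sum(random.random() < p_freq for _ in range(pop)) / pop
            q_freq = 1 - p_freq
        final_freqs.append(p_freq)
    return final_freqs
```

Because the reset happens at the top of the outer loop, every simulation starts from the same initial frequencies.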

The primary issue is that you need to leave the period as a float, because 8000/789 and 8000/739 otherwise round to the same integer: float period = (float)sampleRate / (float)freq; for(int i = 0; i<numSamples; i++){ samples[i] = ((int)(i/period) % 2)==0 ? 0 : 1; } As a secondary stylistic...
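A quick arithmetic check of that rounding collision (done in Python for brevity, outside the original C context):

```python
sample_rate = 8000

# Integer division collapses the two nearby frequencies to the same period...
assert sample_rate // 789 == sample_rate // 739 == 10

# ...while float division keeps them distinct, so the generated square
# waves actually differ in pitch.
period_789 = sample_rate / 789   # ~10.14 samples per half-cycle pair
period_739 = sample_rate / 739   # ~10.83
assert period_789 != period_739
```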

#include <stdio.h> #include <stdlib.h> #include <time.h> int main(void){ int i; int numbers[5], frequences[50]={0}; srand(time(NULL)); for(i=0; i<5; ++i){ numbers[i] = rand()%50;//bias -1 ++frequences[numbers[i]]; } for(i=0; i<50; ++i){ if(frequences[i]) printf("%d->%d\n", i+1, frequences[i]); } return 0; } ※ I have updated this since the question was changed. #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h>...

If your data is a factor with appropriate levels, then you'll have no problem: > x <- factor(letters[1:3]) > y <- factor(letters[1:3], levels = letters) > table(x) x a b c 1 1 1 > table(y) y a b c d e f g h i j k l m...

datetime,pandas,count,time-series,frequency

It might be easiest to turn your Series into a DataFrame and use Pandas' groupby functionality. If your Series is called s, then turn it into a DataFrame like so: >>> df = pd.DataFrame({'Timestamp': s.index, 'Category': s.values}) >>> df Category Timestamp 0 Facebook 2014-10-16 15:05:17 1 Vimeo 2014-10-16 14:56:37 2...

c++,encryption,dictionary,frequency,substitution

What you're basically discovering is that a "partially good" solution already returns those words correctly when just the letters used in the word are correctly substituted. It doesn't matter too much if you have the Q and X mixed up, which is a real risk since they're both rare. So,...

Instead of using the regex, read the file as words=f.readlines(). You'll end up with a list of strings corresponding to each line. Then, build the counter from that list.
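A minimal sketch of that approach; StringIO stands in here for the real file handle (an assumption about the setup), and note that readlines() keeps trailing newlines, so stripping them first keeps the counts clean:

```python
from collections import Counter
from io import StringIO

# StringIO stands in for open("words.txt"); one word per line is assumed.
f = StringIO("apple\nbanana\napple\ncherry\n")

words = [line.strip() for line in f.readlines()]  # strip trailing newlines
counter = Counter(words)
```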

c++,group,frequency,spatial-index,r-tree

R-trees are among the most useful spatial indexing data structures, though they have proven most useful for specific domains and problems. That being said, that's no reason to refrain from being didactic (after all, what's asked may be a simplification of the actual problem). If you choose to use R-trees...

matlab,signal-processing,fft,frequency,continuous-fourier

I can't get your pictures to load over my proxy, but the spectrum of an FFT will have a bigger "gap" in the middle at a higher sampling rate. A fundamental property of sampling is that it introduces copies of your original spectrum; you may have learned this if...

termFreq is meant to be used on a document, not a corpus. If you want to filter on frequency when building your DocumentTermMatrix, you use the DocumentTermMatrix function DTM <- DocumentTermMatrix(MyCorpus , control = list(bounds=list(global = c(4, Inf)))) Here's an example... library(tm) Data<-data.frame(Text=c("aaa bbb aaa ddd","bbb aaa aaa bbb ccc","bbb...

I found your problem: if cc >= num: this test should be: if cc > num: On your first iteration, the first letter will equal the first string letter (obviously), so you will enter if ch.lower() == string[index].lower():. This will set cc to 1, which will, in turn, set...

excel,macros,count,frequency,countif

Assuming the data as you give it is in A1:B18 (with headers in row 1), enter this in B2: =IF(A1<>A2,MATCH(TRUE,INDEX(A2:A$1000<>A2,),)-1,"") Copy down as required. Amend the 1000 to a sufficiently higher row reference if necessary....

ios,core-audio,frequency,goertzel-algorithm

The Goertzel algorithm measures energy at a specific frequency, not musical pitch (which is a different psycho-acoustic phenomenon). The strongest spectral frequencies produced by many stringed instruments and voices are likely to be overtones or harmonics instead of the pitch frequency, and spectral frequency is what a Goertzel filter sees...

java,binary-search-tree,frequency

There are two issues in your code. This performs a reference comparison: if ( w == n.getData() ). You want to compare the data inside the objects, so instead write if ( w.equals(n.getData()) ). But now you still need to override Word.equals() so that it returns true whenever the two...

r,grouping,dataframes,frequency,counting

You can try within(cust, Frequency <- ave(seq_along(Cust_no), Cust_no, FUN=seq_along)) # Txn_date Cust_no Credit Frequency #1 2013-12-02 12345000 400.00 1 #2 2013-12-02 12345000 300.00 2 #3 2013-12-02 12345000 304.71 3 #4 2013-12-02 12345000 475.00 4 #5 2013-12-02 12345000 325.00 5 #6 2013-12-02 34567890 1390.00 1 #7 2013-12-02 34567890 100.00 2 #8...

You can use apply with MARGIN=2 to loop through the columns, subset the elements that are 0:8 (x %in% 0:8), convert to factor with levels specified as 0:8 and use table to get the frequency of elements. apply(A, 2, function(x) table(factor(x[x %in% 0:8], levels=0:8))) Or another option would be to...

Here's a simple option, using data.table instead: library(data.table) dt = as.data.table(your_df) setkey(dt, id, date) # in versions 1.9.3+ dt[CJ(unique(id), unique(date)), .N, by = .EACHI] # id date N # 1: Andrew13 2006-08-03 0 # 2: Andrew13 2007-09-11 1 # 3: Andrew13 2008-06-12 0 # 4: Andrew13 2008-10-11 0 # 5:...

Ultimately I find no error in your code, as it runs without error. The part I think you are missing is the calculation of the class frequencies; add that and you will get your answer. Quickly running through the different objects you provide, I suspect you are looking at buys. buys <-...

r,vector,matrix,cluster-analysis,frequency

You can match V1, V2 etc against the unique levels then tabulate the results. uKmers <- levels(as.factor(arrayKmers)) freqKmers <- apply(arrayKmers, 2, function(x){ tabulate(match(x, uKmers), length(uKmers)) } ) > t(freqKmers) [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [1,] 0 1 0 0 1 1 0 1 [2,] 0 1 0...

audio,frequency,freeswitch,pbx

I use "sox" now for resampling the audio file. You can execute the command-line tool from the script. If somebody knows another function or method in FreeSWITCH for sending at another frequency, please tell me.

Using lubridate(for group by day) and data.table library(data.table) library(lubridate) setDT(df) df[Event!=shift(Event, fill=0), sum(Event), by=floor_date(Date, unit="day")] # floor_date V1 #1: 2002-04-27 2 #2: 2002-04-28 0 df used in above example df <- data.frame(Date=seq(as.POSIXct("2002-04-27 19:30:00 ", tz="GMT"), as.POSIXct("2002-04-28 07:00:00 ", tz="GMT"), by="30 min"), Event=c(0L, 0L, 0L, 0L, 1L, 1L, 0L, 0L, 1L,...

r,statistics,histogram,frequency

Here's an alternative solution, that is based on by-function, which is just a wrapper for the tapply that Jilber suggested. You might find the 'ex'-variable useful: set.seed(1) dat <- data.frame(First = LETTERS[1:3], Second = 1:2, Num = rnorm(60)) # Extract third column per each unique combination of columns 'First' and...

Use table after converting your data to a factor: nominal.vals <- 0:9 x <- c(1, 1, 1, 0, 0, 3, 1, 3, 3) table(factor(x, levels=nominal.vals)) # 0 1 2 3 4 5 6 7 8 9 # 2 4 0 3 0 0 0 0 0 0 ...

I think you are missing the freq argument. You want to create a column indicating how often each project happened. I, therefore, transformed your data using count in the dplyr package. library(dplyr) library(wordcloud) cd <- data.frame(Hours = c(2,3,4,2,1,1,3), Project = c("a","b","b","a","c","c","c"), Period = c("2014-11-22","2014-11-23","2014-11-24", "2014-11-22", "2014-11-23", "2014-11-23", "2014-11-24"), stringsAsFactors =...

I found an answer myself: I calculate the difference between the two peaks with the lowest frequency values and with energy values above a certain threshold. Then, I check if that difference is (within a certain range) in the list of frequencies.

matlab,matrix,frequency,noise,sin

Add the following line at the top of your code: t = 0:186.52/(373046-1):186.52; This vector holds the time instants at which we want to calculate the value of the signal. The length of the signal in time is 186.52 and we want 373046 samples during that time, so the separation between two samples is 186.52/(373046-1) seconds....

import java.util.HashMap; import java.util.Scanner; import java.util.Set; public class Countcharacters { /** * @param args */ static HashMap<String, Integer> countcharact=new HashMap<>(); static HashMap<String, String> linenumbertrack=new HashMap<>(); static int count=1; static void countwords(String line){ //System.out.println(line); String[] input=line.split("\\s"); int j=0; String linenumber=""; for(int i=0;i<input.length;i++){ //System.out.println(input[i]); if(countcharact.containsKey(input[i])==true){...

matlab,matlab-figure,frequency,spectrum

OK, I think I've figured it out: you need to pass more than one sample to the same step call, like so: Fs = 12e6; hss = dsp.SpectrumAnalyzer('SampleRate', Fs); data = randi([0 1],2000,1);%I had to increase the # of points %% OQPSK Modulate data hMod = comm.OQPSKModulator('BitInput',true); tx =...

python,sorting,csv,pandas,frequency

This seems to do what you want, basically add a count column by performing a groupby and transform with value_counts and then you can sort on that column: In [22]: df['count'] = df.groupby('CompanyName')['CompanyName'].transform(pd.Series.value_counts) df.sort('count', ascending=False) Out[22]: CompanyName HighPriority QualityIssue count 5 Customer3 No User 4 3 Customer3 No Equipment 4...

I think your problem is one of sampling - your sampling frequency is too low for the signal you are trying to represent. I suggest that you debug by explicitly computing freq = 7E8/(2*pi); t = 1 + linspace(0, 4E-7, 1001); multiplier = linspace(1,2,1001).^2; omega_t = 2*pi*freq*t.*multiplier; d_omega_t = diff(omega_t);...

Make a list, then use lapply: MyList <- list("s1"=s1, "s2"=s2, "s3"=s3) lapply(MyList, function(x) length(x[x == -4])) The result is a list with the count of -4 for each list element. You can replace lapply with sapply if you want a vector of counts instead of a list; this can be useful...

Your basic strategy to this is to create a dataset that is not patient level, but one observation is one patient-diagnostic code (so up to 9 observations per patient). Something like this: data want; set have; array diag[9]; do _i = 1 to dim(diag); if not missing(diag[_i]) then do; diagnosis_Code...

vector,scheme,iteration,racket,frequency

Jens' answer is of course completely correct. You also ask whether there's another better way to write this code, and indeed I believe there is. I would probably write this code like this: #lang racket (define (frequency? test-num solutions) (for/sum ([value (in-vector solutions)]) (if (equal? value test-num) 1 0))) (define...

slider,control,music,frequency

Your for loops aren't exactly the same. The first option goes through { 0, 1, ..., 7 }; the second option goes through { 8, 7, ..., 1 }. Notice also that control[8] is undefined (valid indices are 0..7), so when it tries to reference this location the application runs into an error....

python,list,frequency,custom-lists,vocabulary

If you want to consider words ending with punctuation you will need to clean the text also i.e 'yields' and 'yields!' from collections import Counter c = Counter() import re vocabulary_list = ['accounting', 'actual cost','yields', 'zero-bond'] d = {k: 0 for k in vocabulary_list} sample_text = "Accounting experts predict actual...
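One hedged way to finish the cleaning idea above; the regex, the sample text, and the word-boundary matching are illustrative assumptions, not the original answer's exact code:

```python
import re

vocabulary_list = ['accounting', 'actual cost', 'yields', 'zero-bond']
sample_text = ("Accounting experts predict actual cost will rise; "
               "yields! may fall to zero-bond levels.")

# Lower-case, then replace punctuation (but not hyphens) with spaces,
# so "yields!" is counted the same as "yields".
cleaned = re.sub(r"[^\w\s-]", " ", sample_text.lower())

# Count whole-word (and whole-phrase) occurrences of each vocabulary term.
counts = {term: len(re.findall(r"\b" + re.escape(term) + r"\b", cleaned))
          for term in vocabulary_list}
```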

Here's a little example. dat2 is just dat1 with three values changed to show that we are finding them to be different from those in dat1. > dat1 <- read.table(h=T, text = "V1 V2 1 xbc 1 xbd 1 xbf 2 xbr 2 xbt 3 xbu 3 xbi 3 xbo")...

data work.claims_data; input patient_id $ claim_number $; datalines; P1 C1 P1 C2 P1 C3 ; run; proc sql; select patient_id,count(distinct claim_number) - 1 as cnt from claims_data group by patient_id having cnt > 0; quit; Working: SQL procedure above will give patient wise count of distinct claim numbers from the...

math,statistics,probability,frequency

No, each flip is independent of subsequent flips. You can get X heads in a row and the probability of a fair coin coming up heads the next time is still 0.50. You might want to read this: http://stats.stackexchange.com/questions/21825/probability-over-multiple-blocks-of-events...
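A quick simulation (illustrative, not from the original thread) makes the point concrete: heads after a streak of three heads still comes up about half the time.

```python
import random

random.seed(42)

streak = 0               # current run of consecutive heads
after_streak = 0         # flips observed right after a 3-heads streak
heads_after = 0          # how many of those flips were heads

for _ in range(200_000):
    heads = random.random() < 0.5
    if streak >= 3:      # we just saw three (or more) heads in a row
        after_streak += 1
        heads_after += heads
    streak = streak + 1 if heads else 0

ratio = heads_after / after_streak   # ~0.5: the coin has no memory
```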

algorithm,optimization,dynamic-programming,frequency

There is no need for dynamic programming for this problem. It is a simple sorting problem, with a slight twist. Compute frequency / length for each file; this tells you how frequently each unit of the file's length is accessed. Sort these in descending order, since each file is "penalized"...
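A sketch of that ordering (the file names and numbers here are made up for illustration):

```python
# Each tuple is (name, access_frequency, length) -- hypothetical data.
files = [("a", 10, 5), ("b", 3, 1), ("c", 8, 5)]

# Sort by frequency per unit length, descending: every unit of a file's
# length delays all files placed behind it, so the highest
# frequency/length ratio should come first.
ordered = sorted(files, key=lambda f: f[1] / f[2], reverse=True)
# b (ratio 3.0) first, then a (2.0), then c (1.6)
```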

r,table,frequency,categorical-data,contingency

I think you can speed this up by writing a bit more precise function and then using aggregate to get the results. You could also use by if you want a more list-based approach, which might be more useful for your next use. I think it will still be slow,...

ios,parse.com,nsdateformatter,frequency

In practice, I find that a fixed interval makes more sense in more cases than a calendar milestone. With that, and very little NSDate logic, I can have my model guarantee that its data is at most N seconds out of date. To do this, I have the model singleton...

matlab,frequency,boxplot,summary

To answer your second question: Yes, you can perform a boxplot on a 5-number summary. Sort of. I mean, it just comes down to the fact that the min/Q1/median/Q3/max of your 5-number summary will be exactly those 5 numbers. So you can just call boxplot on the summary statistics, though...

php,arrays,xml,sorting,frequency

Your $fields is actually going to be the whole SimpleXMLElement inside of your foreach loop. Your arrays will sort as expected if you use this instead: array_push($titles_array, (string)$fields); To count the occurrences, create another array: $titles_count = array(); Then in your loop, do something like this: if (isset($titles_count[(string)$fields])) { $titles_count[(string)$fields]...

Try this: x <- 11:16 y <- c(10,9,11,14,10,6) barplot(y, names=x, main="Some fancy title") ...

If I understand you correctly, the code below would be your solution. You have a list of strings (terms or words) sorted in alphabetical order. // the list below is already sorted in alphabetical order List<String> dupeWordList = new ArrayList<>(wordList); To count the...

To generate the output, you could use the convenient tabulate S = [ 2 2 1 2 2 3 1 1 3 3 1 1 3 4 1 1 3 1 2 1 4 1 3 1 1 1 3 1]; idx = find(S(1:end-1,:)==3); S2 = S(2:end,:); tabulate(S2(idx)) Value Count...

r,frequency,confidence-interval,discretization

Updated after some comments: Since you state that the minimum number of cases in each group would be fine for you, I'd go with Hmisc::cut2 v <- rnorm(10, 0, 1) Hmisc::cut2(v, m = 3) # minimum of 3 cases per group The documentation for cut2 states: m desired minimum number...

Not pretty: =SUM(IF(FREQUENCY(F:F,F:F)>0,1))-IF(COUNTIF(F:F,"=0")>0,1,0) Subtracts 1 if there are any zero values in the list. I was trying to do it using an AND condition within the sum, but couldn't figure it out....

r,ggplot2,frequency,kernel-density

Your plot is doing exactly what is to be expected from your data: You plot data$value, which contains numeric values between 0 and 1, so you should expect the density curve to run from 0 to 1 as well. You plot a histogram with binwidth 0.1. Bins are closed on...

matlab,filter,fft,frequency,lowpass-filter

Short Answer: Using freqz: you only get the single-sided frequency response of the filter and hence you can only apply the filter on the single-sided spectrum of the signal, x to get the single-sided frequency spectrum of the output (filtered) signal o1: z = freqz( b, a, N/2+1, Fs); o1...

collections,task,frequency,words,ir

I humbly direct you to the wikipedia article on Zipf's Law, Formally, let: N be the number of elements; k be their rank; s be the value of the exponent characterizing the distribution. Zipf's law then predicts that out of a population of N elements, the frequency of elements of...
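In symbols, using the definitions above, Zipf's law predicts the normalized frequency of the element of rank k as:

```latex
f(k; s, N) = \frac{1/k^{s}}{\sum_{n=1}^{N} 1/n^{s}}
```

so frequency falls off roughly as a power law in rank, with the sum in the denominator normalizing the frequencies over all N elements.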

python,c,sockets,server,frequency

Thanks to comments on the question, a solution was found. First, the C part was relevant: when data was sent to it, it was not sending anything back. I did not realize at the time that this was mandatory. I guess as a result the related sockets were not closed, hence...

matlab,plot,filtering,frequency

To expand on the comment by Navan, you can use the freqz command to compute and plot the frequency response of the filter. freqz is in the Signal Processing Toolbox, so if you don't have that toolbox, you'll need another method. freqz normally makes two plots: (1) one plot of...

android,timer,frequency,audiotrack

Thanks Boggartfly, but I found a much simpler solution: I can simply loop the AudioTrack object. I used the following code: audioTrack.setLoopPoints(0, generatedSnd.length/4, -1); The start frame is zero. The end frame is length/4 for 16-bit PCM. The negative 1 here means loop an infinite number of times.

android,accelerometer,frequency,smartphone,sampling

A bit of googling found Cochibo's work on the subject. It takes the data gathered with the Accelerometer Frequency app and reports it to the web page. Looking for where the actual sample rate is defined, it seems to be intrinsically connected with the device driver, i.e. the device driver sets...

Did you try typing mtcars$row.names at the console? The way to get the row names as a vector is to use rownames(mtcars). Like this: library(wordcloud) # this requires the tm and NLP packages wordcloud(rownames(mtcars), min.freq=1) # w/o min.freq=1, you get just "merc" ...

Do it on paper first - this is arguably a maths question really. The trick is to be clear about all your definitions (this is also why you're getting downvotes - those same definitions are required for people to give you detailed help). Start with the definition of a square...

python,frequency,word-count,text-analysis

Here is the code: file=open("out1.txt","r+") wordcount={} for word in file.read().split(): word = word.lower() if word.isalpha(): if word not in wordcount: wordcount[word] = 1 else: wordcount[word] += 1 copy = [] for k,v in wordcount.items(): copy.append((v, k)) copy = sorted(copy, reverse=True) for k in copy: print '%s: %d' %(k[1],...

There are two problems here. First, while std::string is null-terminated (required in C++11, de facto in most implementations before that), you cannot access past size(). If you used string::at() directly, then you'd hit this: reference at(size_type pos); Throws: out_of_range if pos >= size() which would be true for the null...

That "kind of" is a bar graph representing the Power Spectral density of the audio signal. Your average player (eg. VLC) will list this as "spectrum visualization", or the like. There's a number of libraries that do that specifically for audio (libvisual), and you can use a lot of signal...

java,associations,frequency,corpus

Let's write it in the same terms: long o11 = 1210738; long o12 = 67360790; long o21 = 1871676; long N = 1024908267229L The first equation is XandY = o11 / N; X = o12 / N; Y = o21 / N; so XandY / (X * Y) is (o11...
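The same arithmetic in Python (the counts are the ones given above); the point is that the three fractions simplify algebraically, which avoids juggling tiny intermediate probabilities:

```python
o11, o12, o21 = 1210738, 67360790, 1871676
N = 1024908267229

# P(X and Y) / (P(X) * P(Y)), computed directly from the probabilities...
ratio = (o11 / N) / ((o12 / N) * (o21 / N))

# ...algebraically simplifies to o11 * N / (o12 * o21).
simplified = o11 * N / (o12 * o21)
```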

All of your processes have some issues, though the compiler may not complain about them as loudly as the one in test. In speed_proc, you are qualifying rising_edge() with an additional comparison. I would recommend nesting if statements instead (put the comparison if inside the rising_edge() if). You're also trying...

r,ggplot2,annotations,bar-chart,frequency

Changing the order could be done by changing the factor beforehand: dat$event <- factor(dat$event, levels = names(sort(table(dat$event)))). And adding percentages works just like you did with absolute values: geom_text(stat='bin', aes(label=paste0(..count.., ", ", round(..count../sum(..count..)*100, 1), "%"))) ...

With this line: txt = open(filename).read() txt is one string. So Counter(txt) Counts each character of the string. In order to count each word of the string, you need to split it into words before the Counter: Counter(txt.split()) Where no arguments passed to split uses all whitespace...
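A small illustration of the difference (the sample string is made up):

```python
from collections import Counter

txt = "the cat and the hat"   # stand-in for open(filename).read()

char_counts = Counter(txt)          # counts single characters
word_counts = Counter(txt.split())  # counts whitespace-separated words
```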

python,dictionary,frequency,keyerror

The specific problem is that you check whether bchars[i] is in d[j], but then the key you actually use is chr(i+97). chr(i+97) is the index of the ith character in bchars, but mapped to ASCII characters starting from 'a'. Why would you want to use this as your key? I...

python,algorithm,list,frequency

This is what I came up with: from collections import Counter import random # the number of question ids I need returned to # assign to the exam needed = 3 # the "pool" of possible question ids the user has access to possible = [1,2,3,4,5] # examples of lists...

Using the power of re and Counter, this task can be easily done: In [1]: import re In [2]: s = "& how are you then? I am fine, % and i want to found some food #meat with vegetable# #tea# #cake# and #tea# so on." In [3]: re.findall(r'#([^#]*)#', s)...
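Completing that idea as a single runnable snippet (the sample string comes from the answer; feeding the matches into Counter is the natural next step):

```python
import re
from collections import Counter

s = ("& how are you then? I am fine, % and i want to found some food "
     "#meat with vegetable# #tea# #cake# and #tea# so on.")

tags = re.findall(r'#([^#]*)#', s)   # non-overlapping #...# spans
counts = Counter(tags)               # e.g. counts['tea'] == 2
```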

You can strip all of the table's attributes with as.vector() as.vector(table(helloWorld)) # [1] 1 1 1 1 1 3 2 1 1 Alternatively (and about twice as fast), convert helloWorld to factor and use tabulate() tabulate(factor(helloWorld)) # [1] 1 1 1 1 1 3 2 1 1 ...

I think you are looking for factor: > L <- list(a, b, d, e) > A <- sort(unique(unlist(L, use.names = FALSE))) > sapply(L, function(x) table(factor(x, A))) [,1] [,2] [,3] [,4] 0 0 2 5 0 1 2 1 0 0 2 2 2 2 0 3 4 3 2 10...

android,android-camera,frequency,light,infrared

If I understand your question correctly, you want to use your phone as a spectrometer. I don't think this is possible, because the camera sensor uses 3 filters to get colored images (red, green, blue), so you can only separate these 3 channels and not a single frequency (more details...

You can achieve this via PROC SQL: select count(*)/count(distinct rid) from patients; ...

matlab,audio,frequency,sampling

You should just use the variable y and reshape it to form your split audio. For example, chunk_size = fs*0.03; y_chunks = reshape(y, chunk_size, 6000); That will give you a matrix with each column a 30 ms chunk. This code will also be faster than reading small segments from file...

r,data.frame,frequency,heatmap

Here is a base R solution in 4 lines of code. First we define a function, spl which splits the components of a comma separated string producing a vector of all the fields. eg takes two string arguments and applies spl to each of them and then creates a grid...

excel-formula,excel-2010,frequency,excel-2013,countif

No need for a formula: select the columns in which you are looking for duplicates, then go to the DATA tab in Excel and click Remove Duplicates in the Data Tools section. There you go! ...

mysql,sql,select,inner-join,frequency

You didn't specify the type of frequency, but this query calculates the number of loans per week for each book that was loaned more than once in 2014: select b.isbn , b.title , count(*) / 52 -- loans/week from loan l join copy c on c.code = l.code join book...

r,frequency,variance,frequency-distribution

One option is using data.table. Convert the data.frame to data.table (setDT) and get the var of "Value" and sum of "Count" by "Group". library(data.table) setDT(df1)[, list(GroupVariance=var(rep(Value, Count)), TotalCount=sum(Count)) , by = Group] # Group GroupVariance TotalCount #1: A 2.7 5 #2: B 4.0 4 a similar way using dplyr is...

Another approach using ave: cbind(x, newCol = ave(x, cumsum(c(FALSE, (as.logical(diff(x)))[-1])), FUN = function(i) seq_along(i) * i)) The result: ..1 newCol 2014-08-26 0 0 2014-08-27 1 1 2014-08-28 1 2 2014-08-29 0 0 2014-08-30 0 0 2014-08-31 1 1 2014-09-01 1 2 2014-09-02 1 3 2014-09-03 1 4 2014-09-04 0 0...

Here's a way to get your first column using apply: # Use a list of the classifier names to make sure you're only # counting their votes classifier.names <- names(results.allm) # Apply over each row (MARGIN = 1) results.allm$consensus <- apply(results.allm[classifier.names], MARGIN = 1, FUN = function(x) { # If...

This should do what you want: Array.prototype.byCount= function(){ var itm, a= [], L= this.length, o= {}; for(var i= 0; i<L; i++){ itm= this[i]; if(!itm) continue; if(o[itm]== undefined) o[itm]= 1; else ++o[itm]; } for(var p in o) a[a.length]= {item: p, frequency: o[p]}; return a.sort(function(a, b){ return o[b.item]-o[a.item]; }); } Test: var...

scala,recursion,case,frequency

The cons operator (::) is an infix operator, so if you want to get a List[T] and not a List[List[T]], you should write freq(c, y.filter(_ == c),(count(c,y),c)) :: list) ...

I think you want to group on both 'Type' and 'Name': print df.groupby(['Type','Name']).size() Type Name Bird Flappy Bird 1 Pigeon 2 Pokemon Jerry 3 Mudkip 2 Or if it is important to have the column named 'Frequency', you could do something like the following: print df.groupby(['Type','Name'])['Type'].agg({'Frequency':'count'}) Frequency Type Name Bird...