mysql,sql,database,performance
I think you'll find the following information enlightening. https://bugs.mysql.com/bug.php?id=32308 http://dev.mysql.com/doc/refman/5.0/en/type-conversion.html Notably, the documentation clearly states (emphasis mine): For comparisons of a string column with a number, MySQL cannot use an index on the column to look up the value quickly. If str_col is an indexed string column, the index cannot...
To solve this problem you need to learn the "Sieve of Eratosthenes". First, get the idea of how it works from here. But this alone is not enough to solve the question, since the complexity of the algorithm is O(n·log(log(n))); therefore, if we put n = 1000000000, it will surely fail to...
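For reference, the basic sieve the answer refers to looks roughly like this (a minimal Java sketch, illustrative only; as the answer notes, this plain form will not scale to n = 1000000000 as-is):

```java
import java.util.BitSet;

public class Sieve {
    // Returns a BitSet where bit i is set iff i is prime, for 2 <= i <= limit.
    static BitSet sieve(int limit) {
        BitSet isPrime = new BitSet(limit + 1);
        isPrime.set(2, limit + 1);                 // start by assuming every n >= 2 is prime
        for (int p = 2; (long) p * p <= limit; p++) {
            if (isPrime.get(p)) {
                for (int m = p * p; m <= limit; m += p) {
                    isPrime.clear(m);              // cross out the multiples of p
                }
            }
        }
        return isPrime;
    }

    public static void main(String[] args) {
        System.out.println(sieve(50));             // {2, 3, 5, 7, 11, ..., 47}
    }
}
```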
android,performance,storage,android-file
You call getRootDirectory() to get the system directory. If you want the true root of all folders, it's always just "/". So just do new File("/");
Except for trivial queries, there is no way to know if you have the optimal query & indexes. Nor can you get a metric for how well designed the schema and application are. 3 seconds on a cold system for a 3-way JOIN with "Rows" of 409, 45, 1 is...
android,performance,opengl-es,opengl-es-2.0
I think you're after a particle system. A similar question is here: Drawing many textured particles quickly in OpenGL ES 1.1. Using point sprites is quite cheap, but you have to do extra work in the fragment shader and I'm not sure if GLES2 supports gl_PointSize if you need different...
javascript,angularjs,performance,caching,angularjs-ng-repeat
Well, if you disable the scope that the ng-repeat is on, then it will no longer render. It essentially becomes static content. This allows you to actually control when it is rendered. ux-datagrid actually uses this concept to turn off DOM that is out of view so...
xml,r,performance,xml-parsing,data.frame
I was able to get better performance by using xpath expressions to extract the information you want. Each of the calls to xpathSApply takes ~20 seconds on my laptop, so all the commands complete in less than 2 minutes. # you need to specify the namespace information ns <- c(soap="http://schemas.xmlsoap.org/soap/envelope/",...
sql-server,database,performance
The best thing to do would depend on what other fields the table has and what other queries run against that table. Without more details, a non-clustered index on (code, company, createddate) that includes the "price" column will certainly improve performance. CREATE NONCLUSTERED INDEX IX_code_company_createddate ON Products(code, company, createddate) INCLUDE...
android,performance,sqlite,compare
In your case you need to evaluate the situation! When I have this problem, the first questions are... Do the tables that I want to 'drop table' have little data? If yes, the best way is the 'drop table' command. If not, then you need to use 'insert' and...
You need a composite index for posts_to_tribes: INDEX(tribe_id, post_id). The GROUP BY was to compensate for the JOIN exploding the number of rows. Here is a better workaround than IN ( SELECT ... ): SELECT p.post_id, p.date_created, p.description, p.last_edited, p.link, p.link_description, p.link_image_url, p.link_title, p.total_comments, p.total_votes, p.type_id, p.user_id FROM posts p...
There is no built-in facility to do what you want, at least in PostgreSQL. Doing it effectively would require significant changes to how data is stored, as currently each row is independent of all other rows (well, except TOAST pointers for out-of-line stored data that's unchanged in an UPDATE). A...
c++,performance,stack,heap-memory
This obviously has nothing to do with "writing vs overwriting". Assuming your results are indeed correct, I can guess that your "faster" version can be vectorized (i.e. pipelined) by the compiler more efficiently. The difference is that in this version you allocate a storage space for temp, whereas each iteration...
I really don't think you need to be concerned about the performance of simply using the class selector. I think the bigger issue with what you asked is that the two selectors will potentially select different elements. The following selector: $('#id').find('.class') is going to find all descendant elements with the...
There won't be any difference, since you've only changed the scope of the variables. Since you're not using the variables outside of the scope, the generated bytecode will be identical as well (you can try it out with javap). So use the second style for clarity. Edit: In fact if...
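As a quick illustration (hypothetical code, not the asker's), the two styles below differ only in where the local is declared, and javap -c shows essentially the same bytecode for both loop bodies:

```java
public class ScopeDemo {
    // Style 1: variable declared outside the loop.
    static int sumOutside(int[] values) {
        int total = 0;
        int v;
        for (int i = 0; i < values.length; i++) {
            v = values[i];
            total += v;
        }
        return total;
    }

    // Style 2: variable declared inside the loop, preferred for clarity.
    static int sumInside(int[] values) {
        int total = 0;
        for (int i = 0; i < values.length; i++) {
            int v = values[i];
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sumOutside(data) + " " + sumInside(data)); // 10 10
    }
}
```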
Even if hash lookup in Javascript is O(1) (I don't know for sure), you'd still have the overhead of the lookup operations. So, yes, this is suboptimal for a big loop.
Calling microtime() with the get_as_float parameter set to true will give you results in seconds, according to the PHP documentation. By default, microtime() returns a string in the form "msec sec", where sec is the number of seconds since the Unix epoch (0:00:00 January 1, 1970 GMT), and msec measures microseconds that have elapsed...
ios,objective-c,performance,sprite-kit,tiled
With tile maps, each tile is usually represented by a SKSpriteNode. So if your map is 320 x 320 and you're using 32x32 tiles, you will end up with 100 nodes. Using 16x16 tiles on the same map size will result in 400 nodes. The more nodes, the greater the...
You can learn more about the optimization of for-in loops at the following link: Optimization killers in Node.js. The same applies to JavaScript in Google Chrome, since both Node.js and Chrome implement the V8 JavaScript engine...
mysql,sql,database,performance,csv
You can import the CSV file into a separate table using mysql LOAD DATA INFILE and then update the entries table using a JOIN statement on the matching column name. E.g: update entries a inner join new_table b on a.name = b.name set a.address = b.address ; Here new_table...
The bottleneck here is actually your for loop. Python for loops are relatively slow, so if you need to iterate over a million items, you can gain a lot of speed by avoiding them altogether. In this case, it's quite easy. Instead of this: for item in range(n): if ((s1[item])**2...
javascript,jquery,arrays,performance,jquery-ui
test.length and count=ui.value won't be the same except in the edge case that the user only moves one step at a time. The slide event returns the current mouse position; if the user skips straight to the end, ui.value will be gCoordsArray.length while test.length == 0. One solution might be to...
A format function that can return many different formats, can be expected to be quite slow. If you are happy with lubridate's year function, you could just use its (very simple) code: as.POSIXlt(x, tz = tz(x))$year + 1900 In general, you should avoid conversions between any types/classes and characters when...
java,performance,serialization
Hi (I am the author of FST): I think the test is flawed: when running your sync (no queues + thread context switches) test in a loop (= proper warmup) I get a mean of 0.7 micros and a max outlier of 14 micros (doubled number of elements in the map though) storing a...
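For what a loop-based warmup looks like in plain Java, here is an illustrative sketch (not the author's actual FST benchmark; serialize() is a stand-in for whatever operation is being measured):

```java
public class WarmupBenchmark {
    public static void main(String[] args) {
        // Warmup: run the hot path enough times for the JIT to compile it.
        for (int i = 0; i < 100_000; i++) {
            serialize();
        }
        // Measurement: time many iterations and report the mean per call.
        final int runs = 100_000;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            serialize();
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("mean: %.3f micros per call%n", elapsed / 1000.0 / runs);
    }

    // Stand-in for the serialization call under test.
    static void serialize() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 16; i++) {
            sb.append(i);
        }
        if (sb.length() == 0) {
            throw new IllegalStateException("never happens");
        }
    }
}
```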
c,performance,opencv,optimization,sse
As harold said, delta is used to make an unsigned comparison. Let's describe this implementation step by step: __m128i x0 = _mm_sub_epi8(_mm_loadu_si128((const __m128i*)(ptr + pixel[0])), delta); __m128i x1 = _mm_sub_epi8(_mm_loadu_si128((const __m128i*)(ptr + pixel[4])), delta); __m128i x2 = _mm_sub_epi8(_mm_loadu_si128((const __m128i*)(ptr + pixel[8])), delta); __m128i x3 = _mm_sub_epi8(_mm_loadu_si128((const __m128i*)(ptr + pixel[12])), delta); m0 =...
ruby-on-rails,ruby,performance
this should help a little: ecommerce_array = Ecommerce.where(legacy_id: legacy_id, company: company) if ecommerce_array.any? historical_interest = ecommerce_array.pluck(:interest) return unless interest == historical_interest ecommerce_array.update_all(....) else #.... EDIT: change this historical_interest = ecommerce_array.pluck(:interest) into this historical_interest = ecommerce_array.collect(&:interest)...
ruby-on-rails,ruby,database,performance,model
Assuming you have properly set relations # user.rb class User has_many :upvotes end we can load comments, current user and his upvotes: # comments_controller.rb def index @comments = Comment.limit(10) @user = current_user user_upvotes_for_comments = current_user.upvotes.where(comment_id: @comments.map(&:id)) @upvoted_comments_ids = user_upvotes_for_comments.pluck(:comment_id) end And then change if condition in view: # index.html.erb <%...
Is this what you want? insert into mytable(name, uuid, . . .) select name, uuid, . . . from mytable where x between $xmin and $xmax and y between $ymin and $ymax; If it is something like this, then you only need one query, just the right conditions....
I would use the if-else case you have now, but use a StringBuffer instead of a case for each different tuple StringBuffer sb = new StringBuffer(); if(radio1.isChecked()) { sb.append("radio1"); } if(radio2.isChecked()) { sb.append("radio2"); } if(checkbox1.isChecked()) { sb.append("checkbox1"); } if(checkbox2.isChecked()) { sb.append("checkbox2"); } textview.setText(sb.toString()); this way you only need 1 case...
You don't need the outer loop as one improvement: func getFields(filter map[string]map[string]bool, msg *Message) (fs []Field) { if fieldFilter, ok := filter[relationString(msg)]; ok { for _, f := range msg.Fields { if _, ok := fieldFilter[f.Name]; ok { fs = append(fs, f) } } } return } ...
mysql,performance,database-design,relational-database,database-schema
What happens when you start getting more than single digit IDs? What does 1111 equate to? 1 to 111, 111 to 1 or 11 to 11? Stick with individual fields. It will be easier to interpret, manage and scale. ...
java,arrays,performance,memory,collections
Your theory is entirely possible. It does take the ArrayList a while to shrink the size of the internal array used to store the references. You can avoid that effect by using another List implementation like LinkedList that doesn't show this behavior, but those also have considerable memory overhead that...
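One way to release the backing array explicitly is ArrayList.trimToSize(), a standard method not mentioned in the answer. A minimal sketch with illustrative sizes:

```java
import java.util.ArrayList;

public class TrimDemo {
    public static void main(String[] args) {
        ArrayList<byte[]> list = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            list.add(new byte[16]);
        }

        list.clear();       // elements are gone, but the large backing Object[] remains
        list.trimToSize();  // shrink the backing array to the current size

        System.out.println("size after trim: " + list.size());
    }
}
```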
The servlet API provides streams on the request and response. The input stream on the request will only load bytes into the JVM as required. The stream does not hold all the data in memory; a call to read() eventually gets some data out of a small buffer or causes...
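A minimal sketch of that idea, assuming the javax.servlet API (the method name and buffer size are arbitrary): the body is consumed chunk by chunk, so only one buffer's worth is in memory at a time.

```java
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServletRequest;

public class RequestBodyReader {
    // Streams the request body; only buffer.length bytes are held in memory at once.
    static long countBytes(HttpServletRequest request) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        try (InputStream in = request.getInputStream()) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                total += read;   // process the chunk here instead of accumulating it all
            }
        }
        return total;
    }
}
```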
javascript,performance,latency
Complete answer: Both. Loading it off of the web will benefit you in a couple of ways: 1) There is a limit to the maximum number of open HTTP requests a browser can have. However, this limit is per domain. So reaching out to Google's servers will not prevent you...
One thing I'd consider would be transforming it in place if you don't use your str for anything else. That way you write back to the same location you read from and might get better caching behaviour. Simply change std::transform(str.begin(), str.end(), std::back_inserter(out), decrement); to std::transform(str.begin(), str.end(), str.begin(), decrement); and you...
javascript,performance,reactjs,reactjs-flux,flux
Yes, that is okay for a real app. It is typical to emit whole objects from stores even if it just so happens one of the listeners only needs a subset of the data. The idea is to keep it simple and avoid having to change the store when what...
c#,performance,list,memory-management,clone
if (destination.Capacity < source.Capacity) { /* Increase capacity in destination */ destination.Capacity = source.Capacity; } This is probably wrong... The source.Capacity could be bigger than necessary... And you are "copying" the elements already contained in destination to a "new" destination "buffer". This copy is unnecessary because then the elements of...
javascript,asp.net,performance,code-timing
I would use Glimpse. It is open source and quite good. You can get a view of asp.net app performance at different levels. glimpse...
I've tried with the sample page: http://thegrubbsian.github.io/jquery.ganttView/example/index.html and the script that you posted didn't show any problem (these are only 17 children). Generally I've used jQuery to retrieve lists of thousands or tens of thousands of elements without any problem. Are you really sure that it's this line that freezes your browser...
You can use logarithms to find the magnitude of the number: var x = 0.00195; var m = -Math.floor( Math.log(x) / Math.log(10) + 1); document.write(m); // outputs 2 (Later versions of JavaScript have Math.log10.)...
performance,sparql,modeling,ontology,sesame
With respect to the modeling question, I'd like to offer a fourth alternative, which is, in fact, a mix of your options 1 and 2: introduce a separate class (hierarchy) for these 'excluded/missing' symptoms, diseases or treatments, and have the specific exclusions as instances: :Exclusion a owl:Class . :ExcludedSymptom rdfs:subClassOf...
java,multithreading,performance,infinite-loop
You might want to consider putting the thread to sleep and waking it up only when your 'ball' variable becomes true. There are multiple ways of doing this, from using the very low-level wait and notify statements to using the java.util.concurrent classes, which provide a less error-prone...
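A minimal wait/notify sketch of that idea (illustrative; the 'ball' flag comes from the question's wording, everything else is made up):

```java
public class BallGate {
    private boolean ball = false;

    // Called by the waiting thread: blocks until ball becomes true.
    public synchronized void awaitBall() throws InterruptedException {
        while (!ball) {
            wait();              // releases the lock and sleeps until notified
        }
        ball = false;            // consume the signal
    }

    // Called by the producer thread when the ball arrives.
    public synchronized void ballArrived() {
        ball = true;
        notifyAll();             // wake up the waiting thread
    }

    public static void main(String[] args) throws InterruptedException {
        BallGate gate = new BallGate();
        Thread waiter = new Thread(() -> {
            try {
                gate.awaitBall();
                System.out.println("got the ball");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.start();
        Thread.sleep(100);
        gate.ballArrived();
        waiter.join();
    }
}
```

The java.util.concurrent alternatives the answer mentions (for example a BlockingQueue or a CountDownLatch) wrap this same pattern with fewer pitfalls.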
ruby-on-rails,ruby,performance
Let's say you want to work on 1 million records in your database. First, your database needs to load and send 1 million records to your Ruby application. Then Rails needs to parse those 1 million records (this uses memory), then generate 1 million record objects and a big array to...
objective-c,osx,performance,cocoa,nsview
Have you profiled your App? Before ripping your view hierarchy apart, use instruments with the time profiler to find out where the time is actually being spent. CALayers are more efficient than UIViews, and it is recommended to avoid using drawRect if you don't need to, but before resorting to...
It may, but switching solvers or building a specialized tactic will probably have a greater influence.
android,performance,android-camera,preview,android-textureview
From a performance perspective, SurfaceView is the winner. With SurfaceView, frames come from the camera and are forwarded to the system graphics compositor (SurfaceFlinger) with no copying. In most cases, any scaling will be done by the display processor rather than the GPU, which means that instead of scanning the...
performance,mongodb,delphi,mapreduce
Phew what a question! First up: I'm not an expert at MongoDB. I wrote TMongoWire as a way to get to know MongoDB a little. Also I really (really) dislike when wrappers have a plethora of overloads to do the same thing but for all kinds of specific types. A...
algorithm,performance,hash,hashtable,hopscotch-hashing
It says find an item whose hash value lies between i and j, but within H-1 of j. It doesn't say find an item whose current location lies between i and j, but within H-1 of j. The d at index 3 has a hash value of 1, which doesn't...
python,performance,dictionary,3d,pygame
Pygame doesn't have the ability to do this natively. If you really want this, you'll need to brush up on your trigonometry to map lines from the 3D space to the 2D screen. At that point, you'll essentially be re-implementing a 3D engine.
performance,hadoop,split,mapreduce
dfs.block.size does not play a role alone, and it's recommended not to change it because it applies globally to HDFS. Split size in mapreduce is calculated by this formula: max(mapred.min.split.size, min(mapred.max.split.size, dfs.block.size)) So you can set these properties in the driver class as conf.setLong("mapred.max.split.size", maxSplitSize); conf.setLong("mapred.min.split.size", minSplitSize); Or in a config file...
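Spelled out in code, the formula is just this (an illustrative sketch, treating the old mapred.* property values as plain longs):

```java
public class SplitSizeDemo {
    // splitSize = max(mapred.min.split.size, min(mapred.max.split.size, dfs.block.size))
    static long computeSplitSize(long minSplitSize, long maxSplitSize, long blockSize) {
        return Math.max(minSplitSize, Math.min(maxSplitSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;   // dfs.block.size: 128 MB
        long minSplit  = 1L;                   // mapred.min.split.size
        long maxSplit  = 64L * 1024 * 1024;    // mapred.max.split.size: 64 MB
        // Prints 67108864: the 64 MB max wins over the 128 MB block size.
        System.out.println(computeSplitSize(minSplit, maxSplit, blockSize));
    }
}
```

This is essentially what Hadoop's FileInputFormat does internally with those three values.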
c,multithreading,performance,matrix-multiplication,simd
Here are the times I get building on your algorithm on my four core i7 IVB processor. sequential: 3.42 s 4 threads: 0.97 s 4 threads + SSE: 0.86 s Here are the times on a 2 core P9600 @2.53 GHz which is similar to the OP's E2200 @2.2 GHz...
sql,sql-server,performance,tsql
What is the difference between them? AFAIK, both are doing an INNER JOIN; the first one uses implicit JOIN syntax whereas the second one uses explicit JOIN syntax. I wouldn't expect any performance difference between them, but the second style of query, using explicit JOIN syntax, is much more recommended...
c#,performance,linq,tsql,database-performance
Your foreach is like a Select. var ProductCodes = db.tbl_ProdCodeValues.Where(x=>x.productID == _Product.productID); var withMatches = ProductCodes .Select(code => new { code, matches = db.InventoryCodeValues.Any(x => x.InventoryValue.ToLower().Contains(code.ProdCodeValue)) }); And now all of this remotes to the database. Look at the query plan to see whether this is acceptable already or whether...
You have a couple of options, but the easiest is just to mark the questions as inactive as soon as their author becomes inactive. Here's an example deleteUser method: Meteor.methods({ deleteUser: function() { // mark the user as inactive Meteor.users.update(this.userId, {$set: {isInactive: true}}); // mark the user's questions as inactive...
Combining ideas from the various answers with some extra bithacks, here is an optimized version: #include <errno.h> #include <stdint.h> #include <stdio.h> #include <string.h> #include <unistd.h> #define BUFFER_SIZE 16384 #define REPLACE_CHAR '@' int main(void) { /* define buffer as uint64_t to force alignment */ /* make it one slot longer to...
javascript,node.js,performance,websocket,socket.io
You can actually improve performance in three ways: 1 - Reduce the size of the instructions you send on the socket. You can achieve this by: minifying your JSON, or using MessagePack, which is smaller than JSON. 2 - Send just what you need. If you need to move the...
If you let your program run through a debugger (in my case: gcc -o collatz -g collatz.c gdb --args collatz 2000000 $run ...segfault at memset(lengths,0,sizeof(lengths))... ), you'll see that your segfault happens when you try to access lengths! The point is that you're doing dynamic memory allocation for arrays wrong;...
performance,pattern-matching,ocaml
match and if have different semantics: match is parallel, if is strictly sequential. If you have the expression: match expr with | A -> e1 | B -> e2 | C -> e3 | ... then it may compare branches in any order. In the example I provided, it may compile...
java,performance,comparison,equality,cpu-architecture
Assuming those operations are JITted into x86 opcodes without any optimization, there is no difference. A possible x86 pseudo-assembly snippet for the two cases could be: cmp i, 1 je destination and: cmp i, 0 jg destination The cmp operation performs a subtraction between the two operands (register i and...
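For completeness, the Java source forms those two snippets roughly correspond to are simply (a trivial, hypothetical example):

```java
public class CompareDemo {
    static boolean equalsOne(int i) {
        return i == 1;   // the "cmp i, 1 / je" case
    }

    static boolean isPositive(int i) {
        return i > 0;    // the "cmp i, 0 / jg" case
    }

    public static void main(String[] args) {
        System.out.println(equalsOne(1) + " " + isPositive(1)); // true true
    }
}
```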
sql-server,r,database,performance,rodbc
I would try RJDBC http://cran.r-project.org/web/packages/RJDBC/RJDBC.pdf with these drivers https://msdn.microsoft.com/en-us/sqlserver/aa937724.aspx library(RJDBC) drv <- JDBC("com.microsoft.sqlserver.jdbc.SQLServerDriver","/sqljdbc4.jar") con <- dbConnect(drv, "jdbc:sqlserver://server.location", "username", "password") dbGetQuery(con, "select column_name from table") ...
Intro This is a numeric variation of the string algorithm we implemented in the other answer. It is faster and does not require either creating or sorting the pool. Algorithm Outline We can use integer numbers to represent your binary strings, which greatly simplifies the problem of pool generation and...
The short answer is yes, there is a performance hit for loading anything, so if you don't need it, don't load it. The longer answer is that you are unlikely to get enough traffic that this is going to be a major factor in the development of the site, but...
Ubuntu has moved from Upstart to Systemd in version 15.04 and no longer respects the limits in /etc/security/limits.conf for system services. These limits now apply only to user sessions. The limits for the MySQL service are defined in the Systemd configuration file, which you should copy from its default...
r,performance,matrix,cluster-analysis,sparse-matrix
I've written some Rcpp code and R code which works out the binary/Jaccard distance of a binary matrix approx. 80x faster than dist(x, method = "binary"). It converts the input matrix into a raw matrix which is the transpose of the input (so that the bit patterns are in the...
Files.list() is an O(N) operation whereas sorting is O(N log N). It is far more likely that it is the operations inside the sorting which matter. Given the comparisons don't do the same thing, this is the most likely explanation. There are a lot of files with the same modification date under...
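If the slower comparator is the one that reads modification times from disk on every comparison (an assumption, since the question's code isn't shown), caching each file's timestamp once before sorting removes that cost. A rough sketch:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SortByMtime {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(".");
        try (Stream<Path> files = Files.list(dir)) {
            // Read each file's modification time exactly once.
            Map<Path, FileTime> mtimes = files.collect(Collectors.toMap(
                    p -> p,
                    p -> {
                        try {
                            return Files.getLastModifiedTime(p);
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    }));
            // Sort on the cached values; the comparator never touches the filesystem.
            List<Path> sorted = mtimes.entrySet().stream()
                    .sorted(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey)
                    .collect(Collectors.toList());
            sorted.forEach(System.out::println);
        }
    }
}
```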
microtime(true) is your PHP friend. If there is an index starting with gameid, "Approach 1" might be faster, since it can do the COUNT(*) in the index. On the other hand, there are two things that count against it: 3 queries is usually slower than 2. If there are a...
I did some benchmarking against three methods. I used an external file for reading (instead of __DATA__). The file consisted of 3 million lines of the exact data you were using. The methods are slurping the file, reading the file line-by line, and using Storable as Sobrique mentioned above. Each...
javascript,performance,comparison,undefined,typeof
It makes absolutely no useful difference either way in this case. The typeof operator returns a string indicating the type of the unevaluated operand. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/typeof We know it is always going to be a string, and it will only be that of a few predefined values, so there is no...
python,performance,numpy,matrix,comparison
Few approaches with broadcasting could be suggested here. Approach #1 out = np.mean(np.sum(pattern[:,None,:] == matrix[None,:,:],2),1) Approach #2 mrows = matrix.shape[0] prows = pattern.shape[0] out = (pattern[:,None,:] == matrix[None,:,:]).reshape(prows,-1).sum(1)/mrows Approach #3 mrows = matrix.shape[0] prows = pattern.shape[0] out = np.einsum('ijk->i',(pattern[:,None,:] == matrix[None,:,:]).astype(int))/mrows # OR out = np.einsum('ijk->i',(pattern[:,None,:] == matrix[None,:,:])+0)/mrows Approach #4...
android,performance,navigation,navigation-drawer
You should not close the Drawer when resuming the Activity: @Override public void onResume(){ mDrawerLayout.closeDrawer(Gravity.START); //this should have been done before super.onResume(); } Instead you should close it when starting an Activity from the drawer: @Override public void onClick(View view, int position) { switch(position){ case 0 : { Intent myIntent...
c,performance,optimization,alphablending,lookup-tables
Table lookup is not a panacea. It helps when the table is small enough, but in your case the table is very big. You write that "16 megabytes used for the table is not an issue in this case", which I think is very wrong, and is possibly the source of...
r,performance,data.frame,paste,rcpp
This matches your description, but not the output you show: mat = as.matrix(d) matrix(paste0(mat[, seq(1, ncol(mat), by = 2)], mat[, seq(2, ncol(mat), by = 2)]), ncol = ncol(mat) / 2) # [,1] [,2] [,3] # [1,] "11" "44" "23" # [2,] "72" "79" "75" # [3,] "85" "38" "12" #...
Since stakx does not seem to be coming back to my question to provide an official answer (and get the credit), I will do it for him: As stakx had hinted at, I didn't log tail calls. In fact, I wasn't even aware of the concept, so I had...
A "compound index" which is the correct term for your "link" does not create any performance problems on "read" ( since writing new entries is obviously more information ) than an index just on the single field used in the query. With one exception. If you use a "multi-Key" index...
There is a chance that there will be no backing char[] array at all in the Java 9 version of String, see JEP 254. That is, toCharArray() will be your only option. Generally you should never use Unsafe APIs unless you are absolutely sure it is necessary. But since you...
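Without Unsafe, the two supported ways to walk a String's characters are charAt() and toCharArray(); a trivial sketch of both (which one wins is workload-dependent):

```java
public class CharAccessDemo {
    // No copy: one bounds-checked charAt call per character.
    static long sumCharAt(String s) {
        long sum = 0;
        for (int i = 0; i < s.length(); i++) {
            sum += s.charAt(i);
        }
        return sum;
    }

    // One defensive copy up front, then plain array access.
    static long sumToCharArray(String s) {
        long sum = 0;
        for (char c : s.toCharArray()) {
            sum += c;
        }
        return sum;
    }

    public static void main(String[] args) {
        String s = "the quick brown fox";
        System.out.println(sumCharAt(s) + " " + sumToCharArray(s));
    }
}
```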
asp.net-mvc,performance,razor,using-directives
Actually, I don't think that it can lower the performance of the website. It may make the compilation a bit longer (a little bit), but after that, your website is compiled. And I think you'll lose much more "performance" through all the processing you'll have in your controllers ;)
c#,performance,matrix,jagged-arrays
You could remove the conditional in the method and increase memory usage to increase access performance like so: var dm = new double[size][]; for (var i = 0; i < size; i++) { dm[i] = new double[size]; for (var j = 0; j < i+1; j++) { dm[i][j] = distance(data[i],...
java,performance,tomcat6,ojdbc
For inexplicable reasons however, this morning the performance increased and my problem is no more. I have no idea why. I have no authority over the server, maybe someone changed something.
python,performance,loops,cmd,progress
You can get a total average number of events per second like this: #!/usr/bin/env python3 import time import datetime as dt start_time = dt.datetime.today().timestamp() i = 0 while(True): time.sleep(0.1) time_diff = dt.datetime.today().timestamp() - start_time i += 1 print(i / time_diff) Which in this example would print approximately 10. Please note...
oracle,performance,oracle11g,oracle10g,database-performance
You are comparing a varchar column with a numeric literal (245643). This forces Oracle to convert one side of equality, and off hand, it seems as though it's choosing the "wrong" side. Instead of having to guess how Oracle will handle this conversion, use a character literal: SELECT * FROM...
java,performance,design,architecture,data-modeling
It totally depends on what kind of data you are working with and what kind of searches you want to perform on it. For example, with hash based structures you can not support partial word searches. You could go for an in-memory relational db if your data is really relational...
You need nothing to speed up sqrt for 32-bit values. HotSpot JVM does it automatically for you. JIT compiler is smart enough to recognize f2d -> Math.sqrt() -> d2f pattern and replace it with faster sqrtss CPU instruction instead of sqrtsd. The source. The benchmark: @State(Scope.Benchmark) public class Sqrt {...
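In source form, the pattern the JIT recognizes is just a float cast around Math.sqrt (a trivial standalone illustration):

```java
public class FloatSqrt {
    // Bytecode: f2d -> Math.sqrt -> d2f, which HotSpot's JIT can turn into a single sqrtss.
    static float sqrt(float x) {
        return (float) Math.sqrt(x);
    }

    public static void main(String[] args) {
        System.out.println(sqrt(2.0f)); // 1.4142135
    }
}
```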
I'd go for Dictionary<int, List<object>>, this guarantees a fast lookup when selecting by ID. Dictionary keys have to be unique, but the List allows you to put more than one object under the same key.
mysql,sql,performance,join,subquery
Whatever comments you supply to this answer, I will continue helping you try revisions, but this was too much to describe in a comment on your original post. You are looking at services within a given UNIX time range, yet your qualifier on service_id in the sub-selects is looking against...
r,performance,matlab,vectorization,reshape
The first step is to convert your array w from 6x9 to 3x3x6 size, which in your case can be done by transposing and then changing the dimension: neww <- t(w) dim(neww) <- c(sqrt(somPEs), sqrt(somPEs), inputPEs) This is almost what we want, except that the first two dimensions are flipped....
arrays,database,performance,postgresql,indexing
GIN and GiST indexes are generally bigger than a simple b-tree, and take longer to scan. GIN is faster than GiST at the cost of very expensive updates. If you store your tags in an array column then any update to the row will generally require an update to the...
c#,multithreading,performance,loops
What I would do is something like this: int openThread = 0; ConcurrentQueue<Type> queue = new ConcurrentQueue<Type>(); foreach (var sp in lstSps) { Thread worker = new Thread(() => { Interlocked.Increment(ref openThread); if(sp.TimeToRun() && sp.HasResult) { queue.Enqueue(sp); } Interlocked.Decrement(ref openThread); }) {Priority = ThreadPriority.AboveNormal, IsBackground = false}; worker.Start(); } //...
javascript,jquery,arrays,performance,underscore.js
Basically, combine() takes an array with the values to combine and the size of the wanted combination result sets. The inner function c() takes an array of previously made combinations and a start value used as an index into the original array for combining. The return value is an array with all the made...
The problem is that you are measuring two different things. 164 ms is the time that the database spent executing the query. I suspect the 824 ms that you measured is query execution + instantiation of your Entity objects.
sql-server,performance,timestamp
My guess is that you also have an index on MachineName - or that SQL is deciding that since it needs to group by MachineName, that would be a better way to access the records. Updating statistics as suggested by AngularRat is a good start - but SQL often maintains...
javascript,performance,for-loop
Yes, something has changed since the article was released. Firefox has gone from version 3 to version 38 for one thing. Mostly when a new version of a browser is released, the performance of several things has changed. If you try that code in different versions of different browsers on...