machine-learning,neural-network,point-clouds

An RBF network essentially involves fitting data with a linear combination of functions that obey a set of core properties -- chief among these is radial symmetry. The parameters of each of these functions are learned by incremental adjustment based on errors generated through repeated presentation of inputs. If I...
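To make the idea concrete, here is a minimal sketch (my own toy example, not from the question): the radial basis functions are fixed Gaussians, and only the linear output weights are fitted -- using batch least squares for brevity, rather than the incremental error-driven updates described above.

```python
import numpy as np

# Hypothetical 1-D toy data (not from the question)
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = np.sin(x) + 0.05 * rng.standard_normal(50)

# Fixed Gaussian centers and width; a full RBF network would also adapt these
centers = np.linspace(-3, 3, 10)
width = 0.8

# Design matrix: one radially symmetric basis function per column
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Least-squares fit of the linear combination of radial basis functions
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
```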

matlab,opencv,cluster-analysis,point-clouds

This example looks like exactly the scenario DBSCAN was designed for: lots of noise with a well-understood density, and clusters of much higher density but arbitrary shape.
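For illustration, here is a minimal DBSCAN sketch built on a k-d tree (my own toy implementation with made-up data; in practice you would use a library implementation such as scikit-learn's or ELKI's):

```python
import numpy as np
from scipy.spatial import cKDTree

def dbscan(points, eps, min_pts):
    """Toy DBSCAN: returns -1 for noise, otherwise a cluster id >= 0."""
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points, eps)
    labels = np.full(len(points), -1)
    cluster = 0
    for i in range(len(points)):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue  # already assigned, or not a core point
        # grow a new cluster outward from core point i
        labels[i] = cluster
        stack = list(neighbors[i])
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:  # expand only via core points
                    stack.extend(neighbors[j])
        cluster += 1
    return labels

# Two dense blobs plus sparse uniform noise
rng = np.random.default_rng(1)
blob1 = rng.normal([0, 0], 0.1, (100, 2))
blob2 = rng.normal([3, 3], 0.1, (100, 2))
noise = rng.uniform(-2, 5, (30, 2))
pts = np.vstack([blob1, blob2, noise])
labels = dbscan(pts, eps=0.3, min_pts=5)
```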

axis,pca,point-cloud-library,ros,point-clouds

Orientations are represented by quaternions in ROS, not by directional vectors. Quaternions can be a bit unintuitive, but fortunately there are helper functions in the tf package to generate quaternions, for example from roll/pitch/yaw angles. One way to fix the marker would therefore be to convert the direction vector into...
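A sketch of that conversion, using scipy as a stand-in for tf.transformations.quaternion_from_euler (the direction vector here is a made-up example; roll is left free and set to zero):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical direction vector the marker's x-axis should point along
direction = np.array([1.0, 1.0, 0.0])
direction /= np.linalg.norm(direction)

# Derive yaw and pitch from the direction vector (roll is unconstrained)
yaw = np.arctan2(direction[1], direction[0])
pitch = -np.arcsin(direction[2])
roll = 0.0

# Equivalent of tf.transformations.quaternion_from_euler(roll, pitch, yaw):
# extrinsic x-y-z rotations; result is [x, y, z, w]
quat = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_quat()
```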

binary,file-format,point-clouds,data-formats

From what I can tell, the PLY format allows arbitrary length lists of attributes attached to each element and can have either ASCII or "binary" format: http://www.mathworks.com/matlabcentral/fx_files/5459/1/content/ply.htm As far as I know, PCD only allows a fixed number of fields: http://pointclouds.org/documentation/tutorials/pcd_file_format.php...

matlab,image-processing,matlab-cvst,point-clouds,stereo-3d

For the 1600 x 3 sized reshaped A, you can use this -

A(~any(isinf(A) | isnan(A),2),:)

If the number of rows to be removed is a small number, you can directly remove them instead for a better performance -

A(any(isinf(A) | isnan(A),2),:) = [];

...
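If it helps, the same row filtering in numpy would look roughly like this (toy data, not the asker's):

```python
import numpy as np

# Hypothetical 1600 x 3 array with a few invalid rows
A = np.ones((1600, 3))
A[10, 0] = np.inf
A[20, 2] = np.nan

# Keep only rows with no Inf/NaN -- the numpy analogue of
# A(~any(isinf(A) | isnan(A), 2), :)
bad = np.any(np.isinf(A) | np.isnan(A), axis=1)
A_clean = A[~bad]
```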

c++,point-cloud-library,point-clouds

If you need a rotation matrix representing an orientation, you can choose the axis along which the volume distribution of the object is greatest (the normalised first eigenvector, i.e. the eigenvector associated with the largest eigenvalue) as the first column of the matrix. For the 2nd column of the...
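A sketch of the eigenvector-to-rotation-matrix step in numpy (toy data; the sign flip at the end just ensures a proper right-handed rotation):

```python
import numpy as np

# Hypothetical elongated cloud: most spread along x, then y, then z
rng = np.random.default_rng(2)
pts = rng.standard_normal((500, 3)) * np.array([5.0, 2.0, 0.5])

# Eigen-decomposition of the covariance; eigh returns ascending eigenvalues
cov = np.cov(pts.T)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]   # largest eigenvalue first
R = evecs[:, order]               # columns = principal axes of the cloud

# Make it a proper rotation (det = +1), flipping one axis if needed
if np.linalg.det(R) < 0:
    R[:, 2] *= -1
```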

javascript,three.js,point-clouds

You can do something like this for meshes; I think it works for point clouds, too.

var vertices = mesh.geometry.vertices;
for ( var i = 0; i < vertices.length; i++ ) {
    var vertex = vertices[ i ].clone();
    vertex.applyMatrix4( mesh.matrixWorld );
    // do something with the transformed vertex
}

...

This could be one approach and should be efficient enough for a wide range of data sizes -

D = pdist(A);
Z = squareform(D);  %// Get distance matrix
N = size(A,1);      %// Store the size of the input array for later usage
Z(1:N+1:end) = Inf; %// Set diagonals as Infinites as...
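The same pattern translates almost one-to-one to scipy, in case that is useful (my own toy data):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.1, 0.0], [5.0, 5.0]])

D = squareform(pdist(A))     # full pairwise distance matrix
np.fill_diagonal(D, np.inf)  # ignore self-distances, as in the MATLAB code
nearest = D.argmin(axis=1)   # index of each point's nearest neighbour
```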

algorithm,data-structures,graph-algorithm,nearest-neighbor,point-clouds

Your problem is part of the topic of Nearest Neighbor Search, or more precisely, k-Nearest Neighbor Search. The answer to your question depends on the data structure you are using to store the points. If you use R-trees or variants like R*-trees, and you are doing multiple searches on your...
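As a concrete illustration of tree-accelerated k-NN search, here is a sketch with scipy's k-d tree on made-up data (R-trees and R*-trees serve the same role: build once, query many times):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
points = rng.random((1000, 3))

# Build the tree once, then reuse it for many queries
tree = cKDTree(points)

# 4 nearest neighbours of the first 5 points (each point finds itself first)
dist, idx = tree.query(points[:5], k=4)
```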

python,optimization,numpy,computer-vision,point-clouds

I can't check it because I don't have your data, but the following code should do the job:

def create_point_cloud_vectorized(self, depth_image):
    im_shape = depth_image.shape
    # get the depth
    d = depth_image[:,:,0]
    # replace the invalid data with np.nan
    depth = np.where((d > 0) & (d < 255), d / 256., np.nan)
    ...

visual-c++,3d,point-cloud-library,depth-buffer,point-clouds

1) Changing the OpenNI grabber to use your own ToF camera will be much more work than simply using the camera's own example in a loop, as shown below. 2) Yes, PCL can show the point cloud image and depth without accessing a .pcd file. What the .pcd loader does...

3d,mesh,maya,point-clouds,meshlab

The Point Cloud Library has a couple of different command-line tools for turning meshes into point clouds, as far as I know by rendering the object into points from a set of views and combining the renderings. e.g. pcl_mesh2pcd, pcl_mesh_sampling

collision-detection,point-cloud-library,point-clouds

The fcl (Flexible Collision Library) can do fast collision detection. These are the supported object shapes:

- sphere
- box
- cone
- cylinder
- mesh
- octree (optional; octrees are represented using the octomap library, http://octomap.github.com)

I assume your point clouds are samples drawn from the surface of objects that occupy a volume...

point-cloud-library,point-clouds

In PCL, registration by itself does not change either cloud. The result of registration is the transformation from the frame of the source cloud to the frame of the target cloud. In your link the registration is done in pairAlign(). It incrementally runs

points_with_normals_src = reg_result;
reg.align(reg_result);

each time getting as...

kinect,point,point-cloud-library,point-clouds

A point cloud is a data structure. It is basically an array/vector of points (each containing x, y, z coordinates, and possibly more information per point). Depth data is the information about depth taken from a sensor, which a point cloud can express. There are other data structures with which...
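In its simplest form that is just an N x 3 array; extra per-point data (e.g. color) can ride along in a structured array, roughly like this (toy values):

```python
import numpy as np

# A point cloud as an N x 3 array of x, y, z coordinates
cloud = np.array([
    [0.0, 0.0, 1.2],
    [0.1, 0.0, 1.3],
    [0.0, 0.1, 1.1],
])

# Extra per-point information stored alongside the coordinates
cloud_rgb = np.zeros(3, dtype=[("x", "f4"), ("y", "f4"),
                               ("z", "f4"), ("rgb", "u4")])
cloud_rgb["x"] = cloud[:, 0]
cloud_rgb["y"] = cloud[:, 1]
cloud_rgb["z"] = cloud[:, 2]
```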

What you want to do is:

- Feature point detection: find points on the surface of a point cloud that have a very distinctive and descriptive neighbourhood.
- Feature estimation: for these points and their neighbours (usually within a spherical radius R), compute a descriptor. This can be a histogram, a simple value...
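The two steps above can be sketched with a deliberately crude toy pipeline (my own invented "distinctiveness" test and descriptor, just to show the shape of it; real systems use descriptors like FPFH or SHOT):

```python
import numpy as np
from scipy.spatial import cKDTree

def keypoints_and_descriptors(cloud, radius, n_bins=8):
    """Toy sketch: keypoints = points whose neighbourhood distances
    have high spread; descriptor = normalised distance histogram."""
    tree = cKDTree(cloud)
    keys, descs = [], []
    for i, p in enumerate(cloud):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 5:
            continue  # too few neighbours to describe
        d = np.linalg.norm(cloud[idx] - p, axis=1)
        if d.std() < 0.1 * radius:
            continue  # crude stand-in for a distinctiveness test
        hist, _ = np.histogram(d, bins=n_bins, range=(0, radius))
        keys.append(i)
        descs.append(hist / hist.sum())
    return np.array(keys), np.array(descs)

rng = np.random.default_rng(4)
cloud = rng.random((200, 3))
keys, descs = keypoints_and_descriptors(cloud, radius=0.3)
```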

opengl,3d,mesh,point-clouds,surface

"I have now created the point cloud of the shoe but have no idea how to construct the surface of it."

How did you construct the point cloud? A 3D scan? If so, there are a lot of programs specifically designed for surface reconstruction from point clouds. I suggest you...

c++,image-processing,computer-vision,point-clouds

If the point cloud data comes from a depth sensor, then you have a relatively dense sampling of your walls. One thing I found that works well with depth sensors (e.g. Kinect or DepthSense) is a robust version of the RANSAC procedure that @MartinBeckett suggested. Instead of picking 3 points...
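For reference, the plain 3-point RANSAC that the robust variant builds on looks roughly like this (a toy numpy version with made-up data; the robust modification then changes the minimal-sample step described above):

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.02, rng=None):
    """Classic 3-point RANSAC plane fit; returns the best inlier mask."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)  # point-to-plane distances
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Hypothetical scene: a dense wall (z = 0) plus scattered outliers
rng = np.random.default_rng(5)
wall = np.column_stack([rng.random(300), rng.random(300), np.zeros(300)])
outliers = rng.random((60, 3)) * 2
inliers = ransac_plane(np.vstack([wall, outliers]))
```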

c++,point-cloud-library,point-clouds

You will have to touch each point once to see if it lives in the slice or not. Building a kd-tree will have higher computational complexity than this, so there is no point building it. This problem is made easier by the fact that the point cloud is axis-aligned....
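That single linear pass is just a boolean mask over one coordinate; a sketch with made-up data and slice bounds:

```python
import numpy as np

rng = np.random.default_rng(6)
cloud = rng.random((10000, 3))

# One pass over all points: keep those whose z lies inside the slice
z_min, z_max = 0.40, 0.45
mask = (cloud[:, 2] >= z_min) & (cloud[:, 2] <= z_max)
slice_pts = cloud[mask]
```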

registration,matching,point-cloud-library,point-clouds

This is also closely related to object recognition, which in the world of 3D model-based perception is closely tied to pose estimation. One issue that frequently pops up in object recognition is what scales at which to ignore features; calculating features at a larger scale is a way to get...

point-cloud-library,point-clouds

Since you want to increase the number of points while preserving the shape/structure of the cloud, I think you want to do something like 'upsampling'. Here is another SO question on this. PCL offers a class for bilateral upsampling. And as always, Google gives you a lot of hints on...

Create separate uniforms and attributes for each material (which will also simplify attribute handling).

If I'm not mistaken, your attributes and uniforms are global variables, and you're trying to share the attributes between the materials. Sharing uniforms works (unless you want them to be different, of course), but attributes have...

You can directly change its value:

pointclouds[0].geometry.colors = ...

and after that set

pointclouds[0].geometry.colorsNeedUpdate = true;

To set an individual point's color, just assign to the corresponding element of colors, like

pointclouds[0].geometry.colors[22] = new THREE.Color("rgb(255,0,0)");

See this: http://jsfiddle.net/aboutqx/dg34sbsk/2/ . Click and you will see the color of one point change...

c#,sharpdx,point-clouds,helix-3d-toolkit,helix

Usually, this is done by setting the Colors property in the PointGeometry3D object of your PointGeometryModel3D. You have to build the geometry on your own:

- Create the render positions
- Create the colors
- Tell the renderer the order of your positions and colors (list indices in Positions/Colors)

//create PointGeometryModel3D object
PointGeometryModel3D...