Is there a way to convert a *world point* into a **normalized viewport** position? How can I do that? I saw how to convert a world position to viewport coordinates, but not to *normalized viewport*... can you help me? Thank you.

Solution:

Just substitute your own `vtkRenderer` and `vtkViewport` objects for the renderer and viewport used below:

```
void WorldToNormalizedViewport(double &x, double &y, double &z)
{
    // renderer is your vtkRenderer*, viewport is your vtkViewport*
    double coordinates[4] = { x, y, z, 1.0 }; // world point is homogeneous (4 components)
    renderer->SetWorldPoint(coordinates);
    renderer->WorldToView();
    renderer->GetViewPoint(coordinates);
    x = coordinates[0];
    y = coordinates[1];
    viewport->ViewportToNormalizedViewport(x, y);
}
```

You can use conditional plotting:

```
splot 'data.txt' u ($1==6 ? $2:1/0):3:4 title 'At-no 6' w points pt 7, \
      'data.txt' u ($1==7 ? $2:1/0):3:4 title 'At-no 7' w points pt 7, \
      'data.txt' u ($1==1 ? $2:1/0):3:4 title 'At-no 1' w points pt
```

This creates ...

matlab,image-processing,plot,3d,surface

Setting those values to NaN should do. Here's an example:

```
[x, y] = ndgrid(linspace(-1,1,500));
z = cos(2*pi*(x+y)*2);
z(x.^2+y.^2>1) = NaN;  % remove values outside unit circle
surf(x,y,z,'edgecolor','none')
colorbar
view(2)
axis equal
```

...

Theoretically normals are not really vectors; they are best thought of as bivectors. It just so happens that in 3D both vectors and bivectors have three components, so it is common to identify the two. If we lived in a four-dimensional world we would not have this confusion....

arrays,image-processing,multidimensional-array,3d,wolfram-mathematica

Your Mathematica syntax is seriously wonky. And you probably shouldn't be using Do, or any other looping construct, in the first place. Let's take some baby steps before we try to walk ... According to the documentation, and my experience, Image3D[{img1,img2}] loads the list of 2D images (the list is...

c++,algorithm,3d,marching-cubes

Lots of questions. I am going to try to give some pointers. First off, 200^3 is a pretty small dataset for CT! What about 1024^3? :) Marching cubes is built for regular grids, so whether data is defined at cube vertices or centers really does not matter: just shift by...

javafx,3d,transparency,transparent,javafx-3d

Transparency in JavaFX 3D shapes has been a long time request... until recently: Since JDK8u60 early access release b14, transparency is enabled in 3D shapes. You can add color with transparency as diffuse color, like in this answer. Also you can add images with some transparency level on every pixel,...

The main idea is to set the HWND as the parent of the vtkRenderWindow. Here is how to do that with a C++ class:

```
class MyRender {
    //attributes ....
    MyRender(HWND parent) {
        renderer = vtkSmartPointer<vtkRenderer>::New();
        _render = vtkSmartPointer<vtkRenderWindow>::New();
        _render->AddRenderer(renderer);
        interactor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
        interactor->SetRenderWindow(_render);
        // setting background
        renderer->SetBackground(0.1, 0.2, 0.4);
        _render->SetParentId(parent);
    }
    void...
```

The Box() constructor is meant for serialization only and doesn't initialize the mesh. The constructor in your upper example is deprecated. Use: tower = new Box(0.5f, 0.5f, 0.5f); This will create a cube of the size 1x1x1 centered at [0, 0, 0]. Also, make sure you look at the tower....

algorithm,3d,geometry,distance,computational-geometry

Find the affine transform M that maps this ellipsoid to an axis-aligned one (a translation by -p followed by a rotation aligning the orientation vector r with the proper coordinate axis). Then apply this transform to the query point and check that the transformed point lies inside the axis-aligned ellipsoid, i.e. x^2/a^2 + y^2/b^2 + z^2/c^2 <= 1.
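Once the transform has been applied, the containment test itself is a one-liner. A minimal Python sketch of the axis-aligned check (the rotation step is assumed to have been done already; the function and parameter names are illustrative):

```python
def inside_axis_aligned_ellipsoid(pt, center, a, b, c):
    # Translate by -center, then check x^2/a^2 + y^2/b^2 + z^2/c^2 <= 1.
    # Assumes the rotation to axis-aligned form has already been applied.
    x, y, z = (pt[i] - center[i] for i in range(3))
    return x*x/(a*a) + y*y/(b*b) + z*z/(c*c) <= 1.0
```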

Using some simulated data this should get you what you want. The key is that you have to create your bivariate bins, accomplished using the cuts() function. Then treating the binned factors as levels we can then count the combinations of each factor level using the table() function: library(plot3D) ##...

The following MAXScript function demonstrates how to calculate a box2 of the safe frame dimensions, given a point2 of the viewport size and the render size. We need to account for two separate cases: one where the differences in aspect result in spacing on the left and right, and one...

To add reflection you first need to mirror the light vector by the surface normal. All of these are unit vectors: l (yellow) is the direction to the light source, n (aqua) is the surface normal, r (green) is the reflected light direction, and e (orange) is the direction to the eye/camera. Points: p (red) is the rendered pixel position, q is...
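The mirroring step described above is the standard reflection formula r = 2(n.l)n - l. A minimal Python sketch, assuming l and n are already unit-length tuples:

```python
def reflect(l, n):
    # Mirror the to-light vector l about the unit surface normal n:
    # r = 2*(n . l)*n - l
    d = sum(li * ni for li, ni in zip(l, n))
    return tuple(2 * d * ni - li for li, ni in zip(l, n))
```

Light arriving along the normal reflects straight back, and light grazing at 90 degrees flips sign, which is a quick sanity check.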

update after further clarifications and chat The whole point is that 3D transformations are not commutative. This means that translating and then rotating is different from rotating and then translating (they produce different results). In some special cases these can coincide (e.g. origins are at 0,0,0 and so on..), but...
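The non-commutativity is easy to demonstrate in 2D with plain Python (the point, angle, and offset here are arbitrary illustrative values):

```python
def rotate90(p):
    # Rotate a 2D point 90 degrees counter-clockwise about the origin.
    return (-p[1], p[0])

def translate(p, t):
    return (p[0] + t[0], p[1] + t[1])

p, t = (1.0, 0.0), (1.0, 0.0)
translate_then_rotate = rotate90(translate(p, t))   # -> (0.0, 2.0)
rotate_then_translate = translate(rotate90(p), t)   # -> (1.0, 1.0)
# The two orders land on different points, so order matters.
```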

windows,visual-studio-2012,dll,vtk

In VStudio, go to your application project properties, select Debugging, and in the Environment option, add PATH=%path_to_the_folder_where_your_your_dll_is_located%; (I'd suggest using relative paths).

c++,3d,directx,direct3d,directx-9

In your code (downloaded), xRes and yRes are ints. Due to integer division, yRes/xRes will be zero, because xRes > yRes. You are passing this into the D3DXMatrixPerspectiveFovLH function as the aspect ratio, which will produce an invalid matrix. Instead, cast them to floats first, before doing the division, and...
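The integer-division pitfall can be sketched in a couple of lines of Python, using `//` to mimic C++ int division (the resolution values are hypothetical; in the C++ code the fix is the equivalent float cast):

```python
xRes, yRes = 1024, 768  # hypothetical int resolution, xRes > yRes

bad_aspect = yRes // xRes                # integer division truncates to 0
good_aspect = float(yRes) / float(xRes)  # cast first: 0.75
```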

Here's a solution, but you'll have to change the test to something other than pDist1 > pDist2:

```
segmentAngle = (math.pi * 2) / numberOfSides
pRawAngle = math.atan2(y, x)
pAngle = abs((pRawAngle % segmentAngle) - (segmentAngle / 2))
pEdgeAngle = pRawAngle - pAngle
pDistanceToEdge = radius * (math.cos(segmentAngle / 2) * ...
```

Are yRes and xRes declared as int? If that's the case, the division y/x may be 0. Try casting one of the variables to float. In addition, I noticed that you are calculating the aspect ratio using yRes/xRes. If you are using yRes for height and xRes for width, you...

I finally figured it out: I had to define the dimensions as variables and put the range of values as data for the dimension variables.

If you want to get the pitch, yaw and roll angles at any stage after several rotations, you can get them from the transformation matrix of the 3D model. If you have a look at the transformation matrix after several rotations: Transform T = model3D.getLocalToSceneTransform(); System.out.println(T); you'll see something like...

To update the widget, you should repaint() it, but calling repaint() directly is not very good, so try: widget.update() From doc: This function does not cause an immediate repaint; instead it schedules a paint event for processing when Qt returns to the main event loop. This permits Qt to optimize...

Generate the points in the straight position then apply the rotation (also check the origin of the coordinates).

python,performance,dictionary,3d,pygame

Pygame doesn't have the ability to do this natively. If you really want this, you'll need to brush up on your trigonometry to map lines from the 3D space to the 2D screen. At that point, you'll essentially be re-implementing a 3D engine.
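For a flavor of what that re-implementation involves, here is a naive pinhole-style projection sketch in Python (screen size and scale factor are arbitrary assumptions, and it ignores clipping, rotation, and everything else a real engine handles):

```python
def project(point3d, scale=1.0, screen_w=640, screen_h=480):
    # Map a 3D point to 2D screen coordinates by dividing x and y by
    # the depth z. Assumes z > 0 and a camera at the origin looking
    # down the +z axis; y is flipped because screen y grows downward.
    x, y, z = point3d
    sx = screen_w / 2 + (x / z) * scale * (screen_w / 2)
    sy = screen_h / 2 - (y / z) * scale * (screen_h / 2)
    return (sx, sy)
```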

Did you do steps 3 and 4 here?: Combining Qt 5.4.1 with vtk 6.2.0 (using CMake GUI 3.2.1) on windows. I'm guessing you didn't change VTK_QT_VERSION to 5...

As you said, the new format is an XML format. If you open the file with a text editor, you should see the tags. Also, usually the extension .vtk is for the legacy format, and extensions .vti, .vtp etc. are for the new format (see page 12 of the document you...

I'm taking you at your word that you want circles, so you need to push the plot area into the upper right corner:

```
outHalfCirc <- function(r, colr) {
  opar = par(xpd=TRUE, new=TRUE)  # plot outside plot area
  polygon(x = seq(r, -r, by=-0.1),
          y = -sqrt(r^2 - seq(r, -r, by=-0.1)^2),  # solve r^2 = x^2 + y^2 for y
          xlim = c(0,7), ylim = c(0,10), ...
```

c++,opengl,3d,drag-and-drop,mouse-picking

I'm used to solving these user-interaction problems somewhat naively (perhaps not in a mathematically optimal way) but "well enough" considering that they're not very performance-critical (the user interaction parts, not necessarily the resulting modifications to the scene). For unconstrained free dragging of an object, the method you described using unproject...

You can do this using the rgl package (an interface to OpenGL) in R and the turn3d function:

```
## Define a function (ys must be positive, we are spinning about x-axis)
f <- function(x) 2*exp(-1/2*x) * (cos(pi*x) + sin(pi*x)) + 2
xs <- seq(0, 10, length=1000)
plot(xs, f(xs), type="l")
## Rotate about ...
```

I fixed it! First thing I had to do was add my buffer binding and attribute pointing into the while loop, I guess you have to tell opengl about them every time you call useProgram. The second thing was I needed to call glUniformMatrix4fv if I wanted to pass my...

I see you also created a github issue. That's the right place to ask this.

You should use labels.append instead of numpy.append. Example code:

```
labels = []
for line in lines:
    lab = [h/100, maxf, title]
    labels.append(lab)
```

Also this would create the labels list as [[1,34,u'te'],[2,44,u've'],[4,43,u'ht']]. [1,34,u'te'],[2,44,u've'],[4,43,u'ht'] is not possible in python; these are either three different lists, or they are enclosed by...

python,matplotlib,plot,3d,surface

Even if I agree with the others that meshgrids are not difficult, I still think a solution is provided by the Mayavi package (check the function surf):

```
from mayavi import mlab
mlab.surf(Z)
mlab.show()
```

The following describes a way which should work for your situation. Compute the normal in billboard space; for example, you could use your texture coordinates here:

```
float3 normal = float3(Tex.x*2-1, 0, Tex.y*2-1);
normal.y = sqrt(normal.x*normal.x + normal.y*normal.y);
```

Create an orthonormal transformation matrix (World -> Billboard). This matrix consists of the three normalized base...

To enable C++ in the NDK, add LOCAL_CPP_FEATURES := rtti exceptions and LOCAL_CPPFLAGS += --std=c++11 to the jni/Android.mk file. By default, the NDK supports only a C++-like language. Note that there's no underscore between CPP and FLAGS. Also, I've used += because this won't overwrite other flags such as -Wall....

A possible solution could be: define a matrix of colour indices (since you need 9 colours it should be a 9 x 3 matrix), then use the value stored in the 5th column of your data to select the colour in the colour matrix (i.e. to select the...

opencv,image-processing,3d,camera-calibration

An x, y image point only determines a ray from the camera center through the image point. It has an infinite number of possible z values, and when you multiply the image point by the inverse matrices you get the equation of a ray, or a line. It is impossible to get 3D from...
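A minimal sketch of that back-projection for a simple pinhole model (the focal lengths fx, fy and principal point cx, cy are assumed parameters; this is K^-1 applied to the homogeneous pixel, up to scale):

```python
def pixel_ray(u, v, fx, fy, cx, cy):
    # Direction of the viewing ray through pixel (u, v) for a pinhole
    # camera with focal lengths fx, fy and principal point (cx, cy).
    # Every depth z along this ray projects back to the same pixel,
    # which is why a single image point cannot recover 3D.
    return ((u - cx) / fx, (v - cy) / fy, 1.0)
```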

graphics,unity3d,3d,game-engine

If you don't use realtime shadows (it's an option, often on mobile), then you can have more or less two approaches for dynamic objects: Use lightmap data baked into probes to approximate per-vertex lighting (no need to have realtime light). It's an approximation but can work in some contexts. Use...

The color RGB vector is multiplied by the shade coefficient (the cosine value, as you initially assumed). The logarithmic scaling is done by the target imaging device and human eyes. If your colors get too dark then the probable cause is: the cosine or angle value gets truncated to integer...
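A minimal sketch of that per-channel multiply, with the cosine clamped so it never truncates to a negative or out-of-range value (the function name and the clamp convention are illustrative choices):

```python
def shade(rgb, cos_term):
    # Diffuse shading: scale each channel by the cosine coefficient,
    # clamped to [0, 1] so grazing or back-facing angles darken toward
    # black instead of producing negative channel values.
    k = max(0.0, min(1.0, cos_term))
    return tuple(c * k for c in rgb)
```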

math,matrix,3d,quaternions,rotational-matrices

vec_res = (inverse(VM) * conversion_to_matrix(q) * VM) * vec_input is perfectly valid. The problem is that inverse(VM) * conversion_to_matrix(q) * VM is NOT equal to conversion_to_matrix(q). Therefore you have to keep the original equation in its entirety....
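A tiny numeric check of this in plain Python, using 2x2 stand-ins (a rotation for conversion_to_matrix(q) and a non-uniform scale for VM, both hypothetical):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

R = [[0.0, -1.0], [1.0, 0.0]]   # 90-degree rotation, stand-in for conversion_to_matrix(q)
VM = [[2.0, 0.0], [0.0, 1.0]]   # non-uniform scale, stand-in view matrix

S = matmul(inv2(VM), matmul(R, VM))
# S == [[0.0, -0.5], [2.0, 0.0]] != R, so the conjugation cannot be dropped.
```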

web,unity3d,3d,augmented-reality,unity-web-player

Try this. Furthermore, I would just uninstall everything and just reinstall with a stable version of Unity 5.

If you are referring to rotating an image during a Swing paint operation, then the correct way to do this is with an AffineTransform. Graphics2D graphic; graphic.drawRenderedImage(image, AffineTransform.getRotateInstance(Math.PI)); Unfortunately AffineTransform does not support perspective transforms (by definition a transform is only Affine if parallel lines remain parallel). For perspective transforms...

Imagine, instead of a camera, you're moving your head around in 3D space. Then Camera.Position specifies where your head is located, Camera.LookDirection determines the direction you're looking at, and Camera.UpDirection shows the orientation of your head. The following pictures clear things up for you. In the first image, Camera...

Your approach is almost correct, but you are failing to update the info in the faces array. With every volume section you add, you add new vertices to the final mesh, so the vertex indices in the faces array should be shifted accordingly. For that, keep a counter of the...

You are defining the variable material in two places, and consequently you are passing MeshBasicMaterial instead of LineBasicMaterial to the THREE.Line constructor. three.js r.71...

You missed one line on the Wikipedia page. So add this line n0 = n / norm(n); and change the final line to L = ((n0*A)-d);
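For reference, the same normalize-then-project computation can be sketched in Python; here the plane is given by a point p0 and a (possibly unnormalized) normal n, which sidesteps any ambiguity about what d refers to:

```python
import math

def point_plane_distance(A, p0, n):
    # Signed distance from point A to the plane through p0 with normal n;
    # n is normalized first (the step missed above), then the offset
    # from p0 is projected onto the unit normal.
    norm = math.sqrt(sum(c * c for c in n))
    n0 = [c / norm for c in n]
    return sum(n0i * (a - p) for n0i, a, p in zip(n0, A, p0))
```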

You could do the following:

```
library(scatterplot3d)
a <- c(1:10)
b <- c(1:10)
c <- c(1:10)
# remove x labels using x.ticklabs = ''
scatterplot3d(a, b, c, main="3-D Scatterplot", color="blue", pch=19, type="h",
              lty.hplot=2, box=F, scale.y=.5, lty.grid=1, lab=c(9,5,1),
              xlab="", ylab="", zlab="", x.ticklabs='')
# add the labels using the text function. srt specifies the angle.
text(x=b, y=1, pos=1, labels=b, srt=45, adj=1, xpd=TRUE, offset=0.5)
```

And...

matlab,user-interface,3d,rotation,lighting

Implement your own rotation functionality: you can implement a custom function that adjusts the axes and the lights at the same time. This way the light gets adjusted continuously while rotating the axes.

```
function follow_me_1
figure
axes('buttondownfcn', @buttondownfcn);  % assign callback
set(gca,'NextPlot','add');              % add next plot to current axis
surf(peaks,'hittest','off');            % ...
```

You can calculate the bounding box of your mesh, and scale it based on the number you get to make it the same size as other objects.
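A minimal sketch of that bounding-box-based scale factor in Python (the vertex-list format and the target-size convention are assumptions):

```python
def scale_to_match(vertices, target_size):
    # Compute the axis-aligned bounding box of the mesh, then return the
    # factor that makes its largest dimension equal to target_size.
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    largest = max(hi - lo for hi, lo in zip(maxs, mins))
    return target_size / largest
```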

python,matplotlib,plot,3d,surface

Apparently, after some trial and error, the best/easiest thing to do in this case is just to convert the r, theta, and z data points (defined as 2D arrays, just like for an x,y,z plot) into cartesian coordinates:

```
# convert to rectangular
x = r*numpy.cos(theta)
y = r*numpy.sin(theta)
z...
```
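The same conversion in plain Python for a single point (the array version above just applies this elementwise):

```python
import math

def cyl_to_cart(r, theta, z):
    # Cylindrical (r, theta, z) -> Cartesian (x, y, z)
    return (r * math.cos(theta), r * math.sin(theta), z)
```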

I solved my own problem. The reason is that I mixed up two versions of OpenSceneGraph, one is compiled with VS2012 without JPEG plugin and the other is compiled with VS2010 with JPEG plugin. The OSG compiled with VS2010 will not work under VS2012. Now I've found another OSG compiled...

I think your error in the 3D vs 2D surface colour is due to data normalisation in the surface colours. If you normalise the data passed to plot_surface's facecolors with facecolors=plt.cm.BrBG(data/data.max()), the results are closer to what you'd expect. Unless you have a strong reason for using imshow, you may...