Wednesday, December 14, 2022

Maya plugin for intersection

I spent some time lately trying to write a Maya plugin that finds the intersection of two surfaces and then helps the user "fix" it, i.e., deform at least one of the surfaces so that they no longer intersect. Here is a video of what I managed to do so far.


It is far from complete, but I decided to call it a day for now until I feel like getting back to it. It is also very simple - right now it only works with oriented surfaces, as I'm using the triangle normals to figure out the interior and the exterior of the intersection. And this is pretty much what it does: it gives you the intersection points and the list of interior points on each mesh. You can then use either of these lists to select the intersecting points on one of the meshes and do something with them, like move them or shrink wrap them to the other surface (which is what I do in the video above).

As for Maya, it proved to be much harder than I thought. Respect to all the people working with the Maya API, whether in C++, MEL, or Python - this is some pretty hard stuff to learn. As a consequence, my solution isn't final yet. All I managed to do is create a locator node with some custom drawing (based on an example) that plots the intersection points. The outputs of my node are: the number of intersection points, the lists of interior points for mesh 1 and mesh 2, respectively, and each of the two mesh patches involved in the intersection (the intersecting triangles and the interior ones). Then I wrote a MEL procedure that takes the interior point list output (an int array) and uses it to make a vertex selection (see the sketch below). In my opinion, this is pretty useful, even if I'm replicating existing work, as the boolean operation in Maya doesn't offer this option. Then you can do what you want with it.
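For illustration, here is a minimal Maya Python (maya.cmds) sketch of what that selection step can look like; the node name, mesh name, and attribute name are hypothetical placeholders, not the plugin's actual interface:

```python
import maya.cmds as cmds

def select_interior_vertices(node='intersectNode1', mesh='pSphere1',
                             attr='interiorPoints1'):
    """Read the node's int-array output and select those vertices on the mesh."""
    # getAttr on an int-array attribute returns a plain Python list (or None).
    ids = cmds.getAttr('{}.{}'.format(node, attr)) or []
    # Build vertex component names and replace the current selection.
    components = ['{}.vtx[{}]'.format(mesh, int(i)) for i in ids]
    if components:
        cmds.select(components, replace=True)
```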

For the spheres case, a shrink wrap using vertex normals seemed to do the job (after some tweaking of the parameters). What the shrink wrap does is choose a direction (or one per vertex) and shoot a ray from each vertex (raycast) until it intersects the other surface - those hit points become the target positions. But simply moving the vertices along the normals also works, as does using a bit of soft selection (though not quite as I expected). Of course, there are a million ways this could go wrong in more complex scenarios, e.g., triangle inversions and self-intersections. I wrote my own shrink-wrap-like deformer to better control the process. The key is to advance only by small incremental steps before self-intersections occur; and even if one does occur, it can be reversed by some smoothing operation. As you can see in the example below, in some cases you need to first smooth (a lot) the mesh selection before starting the shrink wrap deformation.
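To make the idea concrete, here is a standalone numpy sketch of one incremental shrink-wrap step, using the classic Möller-Trumbore ray/triangle test. This is not my actual deformer, just the gist of it, and all names are illustrative:

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test; returns the hit distance t > 0 or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                       # ray parallel to the triangle
    inv_det = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def shrink_wrap_step(verts, normals, selection, tgt_verts, tgt_tris, step=0.25):
    """Move each selected vertex a fraction of the way toward its first hit
    on the target mesh; small incremental steps help avoid self-intersections."""
    for i in selection:
        hits = [ray_triangle(verts[i], normals[i],
                             tgt_verts[a], tgt_verts[b], tgt_verts[c])
                for a, b, c in tgt_tris]
        hits = [t for t in hits if t is not None]
        if hits:
            verts[i] = verts[i] + step * min(hits) * normals[i]
    return verts
```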



I need to play a bit more with the deformation direction field: average it somehow, use rays coming back from the other mesh, etc. But I still think vertex normals are essential for the direction of deformation, as they are also the gradient of the volume of intersection (which we would like to minimize down to zero).

Wednesday, April 20, 2022

Projective constraints

This post just needed to be written, to mark once more something that has been known for some time and that uses, once again, the concept of projection. I am calling "projective constraints" what Baraff and Witkin called conjugate gradient (CG) filters in their turn-of-the-century seminal paper on cloth simulation. I took the name from the SOFA framework documentation, where it appears in a slightly different (but simpler) form - a sign that the concept was previously known (which is almost always the case). In fact, a very similar form of CG appears in the book by Nocedal and Wright on numerical optimization.

I will try to be brief and acknowledge my main source: the excellent tutorial on deformable body simulation from Pixar written by Theodore Kim and David Eberle. They are the ones who extracted an important idea from a 2015 paper on multigrid cloth simulation by Tamstorf et al. That idea is that you don't necessarily need CG to filter (or project) constraints; you can do it with any linear solver of your choice, as long as you transform the system into the right space. This idea was explored before by Ascher, Boxerman, and Ye, but they all focused on a reduced system (of a smaller and even varying size). The pre-filtered preconditioned CG (PPCG) method, as it was later called, got one thing right: the correct form of the transformed system, which is well-conditioned and works great in practice. In retrospect this looks moderately easy to derive, but somehow no one managed to do it correctly before.

Hopefully I will not mess up the equations too much and add to the confusion, but here are the main lines of reasoning. You start with a linear system $\mathbf{Ax}=\mathbf{b}$. Then you have a projection matrix $\mathbf{S}$ that takes you from the full configuration space to a smaller one where motion is restricted by some constraints. The letter $\mathbf{S}$ is chosen historically, and also to differentiate it from its orthogonal complement $\mathbf{P} = \mathbf{I} - \mathbf{S}$, which takes you onto the constraint space, as described on this blog before. Keep in mind also that orthogonal projection matrices are symmetric.
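For reference, the identities used implicitly throughout (both matrices are orthogonal projectors, hence idempotent and symmetric, and they annihilate each other):

$\mathbf{S}^2 = \mathbf{S}, \quad \mathbf{P}^2 = \mathbf{P}, \quad \mathbf{S}^T = \mathbf{S}, \quad \mathbf{P}^T = \mathbf{P}, \quad \mathbf{SP} = \mathbf{PS} = \mathbf{0}$.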

By the rules by which matrices and vectors transform under a change of basis, we can move our linear system into the "free" or "unconstrained" space: $\mathbf{SASx}=\mathbf{Sb}$. Or so it would seem. The more correct form is the one from the PPCG paper:

$(\mathbf{SAS} + \mathbf{I} - \mathbf{S}) \mathbf{y}=\mathbf{S}(\mathbf{b} - \mathbf{Az})$.

Here $\mathbf{z}$ is a prescribed solution for the constrained dimensions of the system, so the final solution is a mixture of these and the free degrees of freedom: $\mathbf{x} = \mathbf{y} + \mathbf{z}$, hence $\mathbf{y} = \mathbf{x} - \mathbf{z}$. You need to be extra careful to project $\mathbf{z}$ by $\mathbf{I} - \mathbf{S}$ whenever you add it to $\mathbf{x}$, so that you don't add contributions from the wrong dimensions; but if we consider it from the start as living in the right space, we shouldn't have to worry about it. Finally, we are solving for $\mathbf{y}$, which lives in the same space as $\mathbf{x}$ but simply doesn't move in the constrained directions - we made sure of that by modifying the system matrix. Moreover, the system is perfectly solvable, as the $\mathbf{I} - \mathbf{S}$ term guarantees no rank deficiency (so one can use a direct solver too).
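Here is a small numpy sketch of this transformed solve on a toy problem (random SPD matrix, a couple of generic constraints), just to show the mechanics; it verifies that the solution keeps the prescribed constrained part $\mathbf{z}$ and solves the system in the free subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)             # SPD system matrix
b = rng.standard_normal(n)

J = rng.standard_normal((m, n))         # constraint Jacobian (full rank here)
P = J.T @ np.linalg.solve(J @ J.T, J)   # projector onto constrained directions
S = np.eye(n) - P                       # projector onto free directions
z = P @ rng.standard_normal(n)          # prescribed values, already in range(P)

# (S A S + I - S) y = S (b - A z); the (I - S) block restores full rank.
y = np.linalg.solve(S @ A @ S + np.eye(n) - S, S @ (b - A @ z))
x = y + z

print(np.allclose(J @ x, J @ z))        # constrained components preserved
print(np.allclose(S @ (A @ x - b), 0))  # residual vanishes in the free space
```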

I admit I don't fully grasp every little step and subtlety in the derivation. And it's always easy to get confused between a projection matrix and its complement. I swapped them by mistake many times; it takes a bit of practice and study of the original papers to get it right, but it's doable.

The one thing I want to draw attention to is this remark from the PPCG paper: "However, the above method easily generalizes to any kind of equality constraint, including the face-vertex and edge-edge constraints used in other collision response algorithms." Indeed, this is a step beyond the original CG filters, but it raises the question of how to do it, as it's not directly obvious. This is where the constraint projection matrix comes in handy, together with the relation $\mathbf{S} = \mathbf{I} - \mathbf{P}$. In our previous discussions and in the constrained dynamics literature, this matrix involves the inverse of the Delassus operator (or Schur complement matrix). But here we can take the metric of the configuration space to be the identity, as all the authors above have done, and we get a simpler form: $\mathbf{P} = \mathbf{J}^T(\mathbf{JJ}^T)^{-1}\mathbf{J}$, where $\mathbf{J}$ is the Jacobian of the constraint functions.

We immediately notice that we may run into problems if $\mathbf{JJ}^T$ is rank deficient (i.e., the constraints are redundant). But this can be solved by performing a QR or SVD decomposition of $\mathbf{J}$ in order to compute the required (pseudo-)inverse. Moreover, in my brief experiments I found that keeping only the first (largest) singular values is enough to make it work. There is still much work to be done here, but in principle it shows that this method is feasible in practice. And if the (approximate) factorization of $\mathbf{J}$ is done fast enough, it could outcompete the Gauss-Seidel solver usually employed for the Schur complement system of the constraints. (I think there is a paper by Jernej Barbic that did something similar some time ago too.) But I'm not sure if, in the end, it's really worth it (in terms of speed or of solving inequality constraints).
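As a sketch of that fallback, here is how the projector can be built from a truncated SVD so that redundant constraint rows are simply dropped (the tolerance is an arbitrary choice):

```python
import numpy as np

def constraint_projector(J, tol=1e-8):
    """Compute P = J^T (J J^T)^+ J via SVD, dropping tiny singular values."""
    U, sigma, Vt = np.linalg.svd(J, full_matrices=False)
    keep = sigma > tol * sigma[0]   # keep only the well-conditioned directions
    V_r = Vt[keep]                  # rows spanning the (numerical) range of J^T
    return V_r.T @ V_r              # orthogonal projector onto that range
```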

All I care about is the beauty of the connections. And I'm just glad that I know now how to better understand CG filters and how to (potentially) extend them to arbitrary constraints. (The only paper that I know of that goes on a similar path is "An Implicit Frictional Contact Solver for Adaptive Cloth Simulation" by Li et al.) Who knows? Maybe in the future we will find out that solving constraints this way is more robust, and cures all the problems of dual relaxation solvers like slow convergence to zero constraint error or large mass ratios. 

But should we use the proper mass matrix in the projection matrix? And how do we find the prescribed $\mathbf{z}$ values? Isn't that a full projection step that we do to solve the constraints? And how do the free degrees of freedom interact with the constraints? (It seems we are freezing the constraint solutions when in fact there should be an equilibrium between them and the rest of the system.) I think these are questions that have to be answered in the wider scope of coupling elastodynamics and constraints (which may still offer some surprising new solvers over the next years). Projective constraints may play an important role in solving this problem, or at least offer an alternative.

Tuesday, January 25, 2022

Are constraint forces ever infinite?

This post was triggered by a section in the Lanczos book that talks about the physical meaning of the Lagrange multipliers. It is basically the argument that constraint forces are in fact the gradients of a very strong potential. The usual picture is that this potential has a minimum exactly on the constraint curve, and hence a zero force there, just as expected. One example of such a potential is a typical inter-molecular one (e.g., van der Waals, Lennard-Jones, Morse), which has an equilibrium point at a certain distance from the molecule, corresponding to a stable point of the chemical bond. Molecules can vibrate around these points (like in a crystal), resulting in a thermal motion that is usually averaged away, leaving only the macroscopic degrees of freedom (DOF). The microscopic DOF oscillations have a reasonably sized amplitude and usually a very high frequency due to the stiff forces.

A first question that comes to mind when dealing with constrained Lagrangian mechanics is: why are the Lagrange multipliers non-zero when the constraint is always satisfied, $\phi(x)=0$? We know how to compute them algebraically, but where do they come from? The standard answer is that they are the limiting case of a very strong potential, as described above. But what does "very strong" mean? Is the force becoming infinite, or just the stiffness, i.e., the second derivative at the equilibrium point?

A partial answer comes from the fact that, if the initial conditions are compatible with the constraint, then the constraint forces have the duty to cancel out external and centrifugal forces acting perpendicular to the constraint (i.e., along the normal to the constraint manifold). This is in accord with d'Alembert's principle and the Newtonian view, where there is always a reaction force keeping the particles in equilibrium along the constrained direction. This reduces the problem to one of statics. But what is the potential we are trying to minimize?

Apparently, there should be three potentials summed up: one for the inertial terms, another for external fields like gravity, and finally one for the constraints. But if the latter is very strong, then the others shouldn't matter anymore, should they? Also, the arguments should not be mixed: we either start from the axiomatic principles of d'Alembert and Lagrange and derive equations with reduced DOF due to constraints; or we start with the maximal DOF, consider very strong potentials, and take the limiting case. And at the end we ask ourselves if the two are the same.

Lanczos and Arnold argue that they are indeed the same. Other authors go a bit deeper and say that the convergence to the Lagrangian equations of motion holds only in a weak or averaged sense. Some go as far as saying that the Lagrange equations on a constraint manifold are not even correct: either from a numerical standpoint (the integrator cannot achieve the same results as the reduced DOF solution) or a theoretical one (the limiting solution depends on many details of the actual strong potential function).

For small displacements, it is enough to approximate the potential function around the equilibrium point. There, the first-order term (the force) is zero and only the second-order term survives, always resulting in a quadratic potential - hence the ubiquitous analogy with a very strong linear spring (with some positive definiteness assumptions, of course). But what justifies taking the small displacement limit? Is it the a priori assumption that a constraint should always be satisfied? Or is it the more practical assumption that somehow the physical system will never oscillate too far from the equilibrium point? That is, the initial conditions are either compatible with the constraint or very close to it. The reasoning is a bit circular, because we would never idealize a very strong interaction force as a constraint unless all the conditions for such an idealization were met in practice (e.g., a rigid body). And this would involve, in my view, some kind of energy dissipation through which all the particles fall into their places (condense) and stay put on the constraint manifold.
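Spelling out that expansion, with $\phi$ the deviation from the constraint manifold and $V$ the strong potential (minimized at $\phi=0$):

$V(\phi) = V(0) + V'(0)\,\phi + \frac{1}{2}V''(0)\,\phi^2 + O(\phi^3) \approx V(0) + \frac{1}{2}k\phi^2$, where $V'(0)=0$ and $k=V''(0)$,

which yields precisely the linear spring force $f = -\partial V/\partial \phi = -k\phi$.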

But there would still be a microscopic vibrating motion, which we can either ignore or not. In the latter case, in molecular dynamics simulations, it amounts to thermodynamical properties, and great effort is usually put into correctly simulating this highly oscillatory motion and preserving the energy. In mechanical engineering, however, this motion is totally ignored. It is not clear, though, what is assumed about it. Does it still happen as a perpetual oscillation, with the resulting motion being an averaged one? Or does our numerical method hit only those points where the constraint is satisfied at either the position or velocity level? (This does happen a lot in practice.) Or is this vibrational energy totally damped out?

The view that Lanczos takes is that the microscopic deviation always happens; it cannot be prevented. And that is why there is always a non-zero constraint force (magnitude), i.e., the Lagrange multiplier. So, at a microscopic level, the constraint is never fully satisfied, except at some sample points or on average. This answers our first question. (It is also the view behind constraint regularization, soft constraints, and mixed formulations.) But then there are many authors who claim that an ideal constraint exists only if infinite forces are allowed. So the second question is: are constraint forces bounded? Again, the Lagrangian formalism answers simply by giving us a formula that can never produce infinite values. But the limiting potential theory could, as the derivative of the potential can grow towards infinity.

In my view, the answer is simple. If the constraint force were just a spring force $f = -k \phi(x)$, then taking the limit $k \rightarrow \infty$ alone would make the force infinite. But remember that we also considered the deviation $\phi$ to be very small, i.e., $\phi \rightarrow 0$. More often, the compliance $\epsilon = 1 / k$ is used instead, and so we get the ratio of two very small numbers: both tend towards zero, but their ratio stays finite. (Although this probably doesn't qualify as a proof.) But is the small deviation hypothesis a sensible one?
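A trivial numeric illustration: a mass hanging from an ever-stiffer spring. The extension $\phi = mg/k$ goes to zero while the spring force $k\phi$ stays pinned at the weight $mg$, which plays the role of the multiplier:

```python
# Static equilibrium of a mass on a spring: k * phi = m * g for any stiffness.
m, g = 1.0, 9.81
for k in [1e2, 1e4, 1e6, 1e8]:
    phi = m * g / k                      # equilibrium constraint violation
    print(f"k={k:.0e}  phi={phi:.2e}  force={k * phi:.4f}")
```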

In practice, with numerical methods, we can always get configurations where the constraint is badly violated. And what about the real physical world? If we raise the temperature high enough, the bonds will vibrate wildly - not only fast, but with high amplitude. Of course, at some point they would break and the constraint idealization would not make sense anymore. So, even when $k$ is very large, we can see visible macroscopic violations of the constraints. But would that matter in an averaged sense? (If we consider a rigid body as the stiff limit of an elastic body, impacts or rigidity itself can be seen as transient dynamic phenomena that dissipate inside the bodies. The energy gets transformed into microscopic vibrations or heat that flows to a cold sink.)

I don't know how these cases should be treated. All I know is that, for the problems at hand in multibody systems, the motion outside the constraint plane does not concern us, and it can be either oscillatory or damped - it doesn't matter. Or does it (given all the numerical drift)? This is why projection methods are often preferred to penalty methods: the latter usually oscillate visibly (unless carefully tuned and damped). And in my view we should always use projection, because it's the only method that can handle incompatible initial conditions or position and velocity drift. But what is the physical limiting phenomenon behind projection? I hope to answer this question in the future; for now it is only a discussion. I think such a phenomenon should clearly contain damping or impulsive components, but the details are not yet clear.

To conclude, I think that whenever you read about infinite forces you should take it with a grain of salt. It's probably the stiffness of the potential that becomes infinite, while the constraint force stays bounded. Only if you are considering impulsive (non-smooth) motion can you regard the projection of the velocity as the result of an infinite force integrated over an infinitesimal time (a Dirac delta). But when we are talking about a strong harmonic potential and constraint regularization, there can be no such forces. The actual motion is in fact highly oscillatory, and the constraint is only satisfied on average.