How can I optimize lattice constants (celldm) in Quantum ESPRESSO?

How can I optimize lattice constants (celldm) in Quantum ESPRESSO? I want to choose the best celldm for my unit cell of two carbon atoms.

The keyword needed for optimising lattice constants is
calculation = 'vc-relax'
which goes inside the &control namelist; "vc" stands for variable cell. A vc-relax run optimizes the cell together with the atomic positions, and requires &ions and &cell namelists in the input as well.
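For the two-carbon cell, a minimal sketch of a complete input might look like the following (diamond structure assumed; the pseudopotential file name, cutoff, and k-point mesh are placeholders you must adapt and converge for your own setup):

&control
    calculation = 'vc-relax'
    prefix      = 'carbon'
    pseudo_dir  = './pseudo'   ! assumed pseudopotential directory
    outdir      = './tmp'
/
&system
    ibrav     = 2              ! fcc Bravais lattice (diamond structure)
    celldm(1) = 6.74           ! initial guess in Bohr; vc-relax optimizes it
    nat       = 2
    ntyp      = 1
    ecutwfc   = 40.0           ! placeholder cutoff; converge for your pseudo
/
&electrons
    conv_thr = 1.0d-8
/
&ions
/
&cell
/
ATOMIC_SPECIES
C  12.011  C.pbe.UPF           ! placeholder pseudopotential file
ATOMIC_POSITIONS crystal
C 0.00 0.00 0.00
C 0.25 0.25 0.25
K_POINTS automatic
8 8 8 0 0 0

At the end of the run the optimized cell is printed in a CELL_PARAMETERS block together with the final atomic positions; the relaxed celldm follows from those lattice vectors.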


Numerically robust predicate for determining common sphere

I would like to know if there is any library/package which provides a numerically robust predicate for testing whether n points (in my case n = 5) lie on a common sphere.
I want to perform this test in the context of Delaunay tetrahedralization. I have seen packages in CGAL but cannot find an explicit function for this problem.
One approach I can think of is using the CGAL::Sphere_d class to initialize a sphere from 4 points, and then testing each remaining point with the has_on_boundary() function. But I am not sure whether this works in general: is it provably correct to do the common-sphere test this way?
You can use either CGAL::side_of_bounded_sphere() or CGAL::side_of_oriented_sphere() with a kernel having exact predicates like CGAL::Exact_predicates_inexact_constructions_kernel.
You can use it like this:
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_3 Point_3;

int main()
{
    // XX are placeholders for your coordinates.
    Point_3 p1(XX,XX,XX), p2(XX,XX,XX), p3(XX,XX,XX), p4(XX,XX,XX), p5(XX,XX,XX);
    // ON_BOUNDARY: p5 lies exactly on the sphere through p1..p4.
    if (CGAL::side_of_bounded_sphere(p1, p2, p3, p4, p5) == CGAL::ON_BOUNDARY)
    {
        // the points are co-spherical
    }
}
There are robust 2D and 3D primitives for the "incircle check" in the Triangle triangulation package by J. Shewchuk.
What about converting all your points to integer coordinates, using suitable scaling?
The InSphere test is an expression of the fifth degree in the coordinates. Roughly speaking, using 64-bit arithmetic you can test spheres up to a diameter of 6,000 or so. With extended 128-bit precision, you cover diameters up to about 44,000,000 (only +, -, * are required).
The benefit is that you avoid any risk of algorithm failures arising from incoherent predicates.
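A minimal sketch of this idea in C++, assuming the GCC/Clang __int128 extension (the function and the bound quoted in its comment are illustrative, not a drop-in robust predicate):

// 128-bit integers are a GCC/Clang extension.
typedef __int128 i128;

// Determinant of a 3x3 matrix, given row by row.
static i128 det3(i128 a, i128 b, i128 c,
                 i128 d, i128 e, i128 f,
                 i128 g, i128 h, i128 i)
{
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
}

// Sign of the InSphere determinant for integer points p[0..4]:
// 0 means p[4] lies exactly on the sphere through p[0]..p[3]
// (the sign convention depends on the orientation of p[0]..p[3]).
// Exact as long as the degree-5 products fit in 128 bits, i.e.
// coordinate differences up to roughly 10^7.
int insphere_sign(const long long p[5][3])
{
    i128 m[4][4];
    for (int r = 0; r < 4; ++r) {
        i128 dx = p[r][0] - p[4][0];
        i128 dy = p[r][1] - p[4][1];
        i128 dz = p[r][2] - p[4][2];
        m[r][0] = dx; m[r][1] = dy; m[r][2] = dz;
        m[r][3] = dx * dx + dy * dy + dz * dz;  // squared length, degree 2
    }
    // Cofactor expansion of the 4x4 determinant along the last column.
    i128 det = -m[0][3] * det3(m[1][0], m[1][1], m[1][2],
                               m[2][0], m[2][1], m[2][2],
                               m[3][0], m[3][1], m[3][2])
               + m[1][3] * det3(m[0][0], m[0][1], m[0][2],
                                m[2][0], m[2][1], m[2][2],
                                m[3][0], m[3][1], m[3][2])
               - m[2][3] * det3(m[0][0], m[0][1], m[0][2],
                                m[1][0], m[1][1], m[1][2],
                                m[3][0], m[3][1], m[3][2])
               + m[3][3] * det3(m[0][0], m[0][1], m[0][2],
                                m[1][0], m[1][1], m[1][2],
                                m[2][0], m[2][1], m[2][2]);
    return (det > 0) - (det < 0);
}

Since the arithmetic is exact, a zero result really means co-spherical; no epsilon threshold is involved.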

Initial Hidden Markov Model for the Baum Welch algorithm

While trying to make a program for hidden Markov models, I made the simplest assumption for the initial HMM of the Baum-Welch algorithm: set everything to a uniform distribution. That is,
A[i][j] = 1.0 / statenumber;
B[i][j] = 1.0 / observationnumber;
P[i] = 1.0 / statenumber;
up to a logarithm to avoid underflow. This has the benefit of not requiring a normalization check.
But so far, the algorithm doesn't actually do much. The emission matrix changes at the first iteration, but not after that, and the transition matrix and initialization vector do not evolve at all. It seems that the gamma matrix does not change at all either.
At first I thought my algorithm wasn't working out, but after trying it on some other HMM libraries, I get the same kind of results.
Is it impossible to converge to the correct HMM using such an initialization, and what is the ideal method for initializing these arrays?
The Baum-Welch algorithm won't converge from a uniform initial distribution: the updates are degenerate because uniformity is a symmetric fixed point. With identical rows in A and identical rows in B, every state explains every observation equally well, so the posterior state probabilities (gamma) are the same for all states, and the re-estimation step reproduces the same uniform transition and initial parameters; only the emissions move once, toward the empirical symbol frequencies. Break the symmetry by randomizing the initial matrices (keeping each row normalized) instead.
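A minimal sketch of such a randomized initialization in C++ (the names mirror the arrays in the question; sizes are illustrative):

#include <random>
#include <vector>

// Build a matrix with random positive entries, each row normalized to
// sum to 1, i.e. a valid row-stochastic probability table.
std::vector<std::vector<double>> random_stochastic(int rows, int cols,
                                                   std::mt19937& rng)
{
    std::uniform_real_distribution<double> u(0.1, 1.0); // bounded away from 0
    std::vector<std::vector<double>> m(rows, std::vector<double>(cols));
    for (auto& row : m) {
        double sum = 0.0;
        for (double& x : row) { x = u(rng); sum += x; }
        for (double& x : row) x /= sum; // normalize the row
    }
    return m;
}

int main()
{
    std::mt19937 rng(42); // fixed seed so runs are reproducible
    int statenumber = 3, observationnumber = 5;
    auto A = random_stochastic(statenumber, statenumber, rng);       // transitions
    auto B = random_stochastic(statenumber, observationnumber, rng); // emissions
    auto P = random_stochastic(1, statenumber, rng);                 // initial distribution
}

Running Baum-Welch from several such random starts and keeping the highest-likelihood result is the usual guard against bad local optima.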

3D interpolation methods in C (or Fortran), and comparison to Shepard's Method

I would like to interpolate a 3D scalar function f(x, y, z). I have coded up a 3D linear interpolation algorithm (http://en.wikipedia.org/wiki/Trilinear_interpolation). This was not so bad.
However, I would like something more sophisticated, e.g. 3D cubic splines. Is there any open-source, easy-to-use, publicly available code for interpolating a 3D scalar field? I would prefer to use C, but Fortran would be OK as well. I would like to stay away from Matlab.
I have seen similar questions asked here:
Interpolating a scalar field in a 3D space
and
What are some good libraries for 3D interpolation?
The second one was satisfied with Matlab, which I am not.
As for the first one, the main suggestion was Shepard's method. I am curious how accurate Shepard's method is. For instance, on a uniform grid one can apply Shepard's method only to nearby grid points; in that case, does it tend to be more accurate than linear interpolation or cubic splines? I imagine not, but I wasn't 100% sure. If in fact it is not better, then I would prefer to find code using something like splines, if any such codes are available.
Take a look at Geometric Tools for Interpolation: templated C++ for tricubic, uniform B-splines, and much more.
(einspline, a C library for B-splines in 1d 2d 3d, seems to be dormant as of 2013; the author doesn't answer emails. Also, it's C; C++ templates would reduce code bloat for interpolating floats, colors, vecs ...)
I haven't used either of these.
On Inverse distance weighting, a.k.a. Shepard's method: you can take any number of neighbors (in 3d, 2^3 or 3^3 or 4^3 ...). A general problem is "sagging"; see the plot in the link.
"Accuracy" of any interpolation method is really hard to measure: what's "golden", for what class of data and what noise? And you have two measures to trade off, error at the data and smoothness (for photo enlargement, three: aliasing, blurring and edge halos). There's some theory on spline interpolation of band-limited functions, but afaik none at all for IDW.
Added: what about the bullseye effect?
IDW is a terrible choice in almost every case. It assumes that all of your input data points are local minima or maxima!
Well, IDW can have peaks above nearby data points if there are high peaks far away. For example in 1d, IDW( [0 0] [1 0] [2 y] ) = y/7 at x = 1/2. But IDW weights ~ 1/distance may be too spiky, and fall off too fast, for some tasks. Interpolation methods and kernels have to be chosen to fit specific data and noise: an art.
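A minimal 1d sketch of Shepard's method with weights 1/distance, reproducing the y/7 value above (all points are used, no neighbor cutoff):

#include <cmath>
#include <cstdio>

// 1d inverse distance weighting with weights 1/|x - xs[i]|.
double idw(const double* xs, const double* fs, int n, double x)
{
    double wsum = 0.0, fsum = 0.0;
    for (int i = 0; i < n; ++i) {
        double d = std::fabs(x - xs[i]);
        if (d == 0.0) return fs[i]; // exactly at a data point
        double w = 1.0 / d;
        wsum += w;
        fsum += w * fs[i];
    }
    return fsum / wsum;
}

int main()
{
    double xs[] = {0.0, 1.0, 2.0};
    double fs[] = {0.0, 0.0, 7.0};            // y = 7
    std::printf("%g\n", idw(xs, fs, 3, 0.5)); // prints 1, i.e. y/7
}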
The bspline-fortran library does 2d-6d b-spline interpolation for data on a regular grid. It is written in modern Fortran (there is a basic subroutine interface and also an object-oriented interface).
vspline is a FOSS C++ template library for b-spline processing. It's dimension-agnostic, so you can use it for 3D data. Its focus is on efficiently processing large raster data sets with multithreaded SIMD code. If you're concerned about precision, it can use long doubles for calculations and has extremely precise precomputed constants for maximum fidelity.

Passing varying array from vertex to geometry shader on Mac

I'd like to be able to pass an arbitrary number of varying values per vertex from the vertex shader to the geometry shader. I know that OpenGL has no dynamic arrays, so the number should be specified at compile time. The whole thing should run on an Apple MacBook with a NVIDIA GeForce 9400M graphics card and a driver that only offers OpenGL 2.1, along with some extensions.
The problem here seems to be that the geometry shader takes its input in the form of an array with one element per vertex. As far as I can tell, there are no arrays of arrays available in my setup, and no arrays of interface blocks containing arrays either. So far, the best solution I could come up with is specifying a number of separate variables to pass this information, extracted from an array in the vertex shader and turned back into an array with a certain stride length in the geometry shader. That way, access to the values can still be performed using computed indices.
Is there a better, more elegant way?
From EXT_geometry_shader4 specification:
User-defined varying variables can be declared as arrays in the
vertex shader. This means that those, on input to the geometry shader,
must be declared as two-dimensional arrays. See sections 4.3.6 and 7.6 of
the OpenGL Shading Language Specification for more information.
For example, in the vertex shader, you may specify
varying vec2 value[2];
and in the geometry shader, this becomes a two-dimensional array, e.g. with triangles as input primitives
varying in vec2 value[3][2];
Note the counterintuitive order of array indices! Also beware that the array dimensions must be specified explicitly, using an integer constant. Using a non-constant integer variable or gl_VerticesIn yields a compiler error. Both remarks have been tested on the very MacBook Pro model mentioned in the question.
There are reasons why core OpenGL's geometry shaders don't work the way EXT_geometry_shader4 does. This is one of them. EXT_geometry_shader4 doesn't allow arrays of inputs because that would mean allowing arrays of arrays of values. And GLSL can't handle that (well, until recently, but that's only 2 months old).
Interface blocks can have arrays in them. Your problem is that GLSL 1.20 doesn't have interface blocks.
There's not much you can do besides use different variables and manually unroll all your loops. You could write a function that takes an integer value and conditionally returns one of the different values that correspond to that index, but that's about the best you're going to get with old-school GLSL.
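A sketch of that manual unrolling in GLSL 1.20 with EXT_geometry_shader4 (variable names and the count of two varyings are illustrative):

// vertex shader: one separate varying per array slot
varying vec2 value0;
varying vec2 value1;

// geometry shader (triangles as input): one array per slot, plus a
// helper that emulates a computed second index
varying in vec2 value0[3];
varying in vec2 value1[3];

vec2 get_value(int vertex, int index)
{
    if (index == 0)
        return value0[vertex];
    return value1[vertex];
}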

Can you use cvCalibrateCamera2 to find the extrinsic relationship between two cameras?

I'm wondering whether it is possible to use cvCalibrateCamera2 to find the rotation and translation between 2 cameras.
Take, for example, the scenario where you have just done a chessboard calibration where the chessboard was visible in both image planes. If I then pass the points from one camera as the object_points and the points from the other camera as the image_points, would this give me the rotation and translation between the two views?
Also, this spits out several rotation matrices and translation vectors. Which one would I use?
I think cvStereoCalibrate is probably a better match for your needs. You'll need to provide a set of 2D correspondences as well as both cameras' intrinsics, which you already know from the chessboard calibration.
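A minimal sketch with the C++ API's cv::stereoCalibrate (buffer names and the image size are illustrative; gathering the chessboard corners is assumed to be done already, e.g. with findChessboardCorners):

#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

int main()
{
    // One entry per chessboard view: the 3D corner positions in the
    // board's own frame, and the detected 2D corners in each camera.
    std::vector<std::vector<cv::Point3f> > objectPoints;
    std::vector<std::vector<cv::Point2f> > imagePoints1, imagePoints2;
    // ... fill the three vectors from your chessboard detections ...

    // Intrinsics and distortion from the earlier per-camera calibrations.
    cv::Mat K1, D1, K2, D2;
    cv::Size imageSize(640, 480); // assumed resolution

    cv::Mat R, T, E, F;
    cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                        K1, D1, K2, D2, imageSize,
                        R, T, E, F); // intrinsics are kept fixed by default

    // R and T are the single rotation/translation taking points from the
    // first camera's frame to the second's: the extrinsic relationship.
    return 0;
}

This also answers the "which one would I use?" part: calibrateCamera returns one rvec/tvec per view (the board's pose in each image), while stereoCalibrate returns the single R, T between the two cameras.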
Probably best to use FindExtrinsicCameraParams2
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html#findextrinsiccameraparams2
