How to store a grid of values in a PostGIS database such that it can be contoured by GeoServer?

I'm planning to use GeoServer with a PostGIS database to provide contours over a Web Mapping Service.
I have a simple lat-long grid of values which I want to store in the database and have contoured. Whilst the GeoServer user manual implies that it is possible in this example...
https://docs.geoserver.org/stable/en/user/styling/sld/extensions/rendering-transform.html#contour-extraction
...it does not talk about what format the data should be in. Please can anyone suggest a suitable PostGIS database schema I can use that GeoServer will understand and be able to contour? Preferably one which will work with the GeoServer example from the link above.
Thanks for your help.

Since your data is already in a Java program, I would dive into GeoTools which is the underlying library that GeoServer uses to do the actual work.
Looking at ContourProcess, what you actually need is a GridCoverage2D, which provides basic access to grid data values backed by a two-dimensional rendered image; each band in the image is represented as a sample dimension.
So you'd want to take your data array and do something like this:
WritableRaster raster2 = RasterFactory.createBandedRaster(
        java.awt.image.DataBuffer.TYPE_INT, w, h, 1, null);
for (int i = 0; i < w; i++) {     // width
    for (int j = 0; j < h; j++) { // height
        // assuming myData holds the grid row by row (row-major order)
        raster2.setSample(i, j, 0, myData[j * w + i]);
    }
}
GridCoverageFactory gcf = new GridCoverageFactory();
// I'm using OSGB (EPSG:27700) as I live in the UK; you would use something else
CoordinateReferenceSystem crs = CRS.decode("EPSG:27700");
// Position of the lower-left corner of the grid
int llx = 500000;
int lly = 105000;
// Pixel size in projection units
int resolution = 10;
ReferencedEnvelope referencedEnvelope = new ReferencedEnvelope(
        llx, llx + (w * resolution), lly, lly + (h * resolution), crs);
GridCoverage2D gc = gcf.create("name", raster2, referencedEnvelope);
You can either then write it out as a GeoTiff or wrap all the above up into a new Process which returns contours.

So I had a play and can confirm that the code by @IanTurton works like a charm. Here is my final code, based on his, the main differences being that I'm using a lat/long coordinate reference system and I've included some code to write the raster out as a GeoTIFF...
import java.awt.image.WritableRaster;
import javax.imageio.ImageIO;
import javax.imageio.stream.ImageOutputStream;
import java.io.File;
import java.io.IOException;
import javax.media.jai.RasterFactory;
import org.geotools.coverage.grid.GridCoverage2D;
import org.geotools.coverage.grid.GridCoverageFactory;
import org.geotools.geometry.jts.ReferencedEnvelope;
import org.geotools.referencing.CRS;
import org.geotools.referencing.crs.DefaultGeographicCRS;
import org.geotools.gce.geotiff.GeoTiffFormat;
import org.geotools.gce.geotiff.GeoTiffWriter;
import org.opengis.parameter.GeneralParameterValue;
import org.opengis.parameter.ParameterValue;
import org.opengis.referencing.FactoryException;
import org.opengis.referencing.NoSuchAuthorityCodeException;
import org.opengis.referencing.crs.CoordinateReferenceSystem;
public class GridToGeoTiff {
    public static void main(String[] args) throws NoSuchAuthorityCodeException, FactoryException, IllegalArgumentException, IndexOutOfBoundsException, IOException {
        // Define the data grid
        double[][] myGrid = {
                { 0.0, 0.2, 0.6, 0.3 },
                { 0.1, 1.1, 0.8, 0.7 },
                { 1.1, 2.6, 3.4, 0.3 },
                { 0.3, 0.9, 0.6, 0.1 }
        };
        int w = myGrid[0].length; // raster width = number of columns
        int h = myGrid.length;    // raster height = number of rows
        // Position of the lower-left corner of the grid
        double southBound = 51.5074; // degrees latitude
        double westBound = 0.1278;   // degrees longitude
        double resolution = 0.001;   // cell size in degrees lat/long
        // Convert to a raster; note the single band and the floating-point
        // sample type, so the fractional grid values are not truncated to ints
        WritableRaster raster2 = RasterFactory.createBandedRaster(
                java.awt.image.DataBuffer.TYPE_DOUBLE, w, h, 1, null);
        for (int i = 0; i < w; i++) {
            for (int j = 0; j < h; j++) {
                raster2.setSample(i, j, 0, myGrid[j][i]);
            }
        }
        // Create a GeoTools 2D grid coverage referenced in lat/long
        GridCoverageFactory gcf = new GridCoverageFactory();
        CoordinateReferenceSystem crs = DefaultGeographicCRS.WGS84;
        ReferencedEnvelope referencedEnvelope = new ReferencedEnvelope(
                westBound, westBound + (w * resolution),
                southBound, southBound + (h * resolution), crs);
        GridCoverage2D gc = gcf.create("my-grid", raster2, referencedEnvelope);
        // Write out to a GeoTIFF file, passing in the write parameter we set up
        final File geotiff = new File("my-grid.tif");
        final ImageOutputStream imageOutStream = ImageIO.createImageOutputStream(geotiff);
        GeoTiffWriter writer = new GeoTiffWriter(imageOutStream);
        final ParameterValue<Boolean> tfw = GeoTiffFormat.WRITE_TFW.createValue();
        tfw.setValue(true);
        writer.write(gc, new GeneralParameterValue[] { tfw });
        writer.dispose();
    }
}
I'm using the following Maven dependencies...
org.geotools 22.2: gt-main, gt-coverage, gt-referencing, gt-geometry, gt-geotiff
org.opengis 2.2.0: geoapi
org.locationtech.jts 1.16.1: jts-core
javax.media.jai 1.1.3: com.springsource.javax.media.jai.core
...from the Boundless and OSGeo repositories.
Having used this code to create a GeoTIFF file, I could then use it to set up a store in GeoServer and publish it. I adapted the SLD in GeoServer's contouring example (literally just changed the names and contouring thresholds) to create a style, which I then applied to the published GeoTIFF data, et voilà: contours on a map!
But... my data is not static and I will be producing many different grids, so this file-based approach is going to be a little clunky. Therefore I'm going to look into GeoServer's ImageMosaic plugin as a way of getting the contours straight from the database. However, it seems that this is not a popular option and might not be production-ready (according to this post), so I may end up contouring the data myself and storing the contours as vectors after all. If anyone has further thoughts on this I'd love to hear them.
Thanks for all your help everyone!

Related

Positioning Array Content (Sprites)

I have pictures with numbers on them (sprites).
I reference them on an empty GameObject via [SerializeField] and instantiate them through a C# script, so the objects are not placed in the scene by hand; they are generated when the game begins.
As you can see in the code, I can set the row and column counts and, with the offsets, the distances on the X and Y axes. But I cannot reposition the grid. The first sprite generated (the top-left one) seems locked to the middle of the scene. I tried moving the gizmo of the empty GameObject, but the sprites stay where they are, even if I use the Inspector instead. It seems the positioning would need to be done in the script, but how?
Please give me examples which will work with Unity.
What I tried: moving the GameObject's gizmo, as mentioned, and also editing its Transform in the Inspector. It really seems this can only be done in the script (I might be wrong, but I tried everything).
public class Controll : MonoBehaviour
{
    public const int gridRows = 6;
    public const int gridCols = 6;
    public const float offsetX = 0.65f;
    public const float offsetY = 0.97f;

    [SerializeField] private GameObject[] cardBack;

    // Use this for initialization
    void Start()
    {
        for (int i = 0; i < gridRows; i++)
        {
            for (int j = 0; j < gridCols; j++)
            {
                Instantiate(cardBack[i], new Vector3(offsetY * j, offsetX * i * -1, -0.1f), Quaternion.identity);
            }
        }
    }
}
You are instantiating all objects at the Scene root level; they are in no way related to the GameObject that was originally responsible for the instantiation.
If you instead want them positioned relative to the spawning GameObject, then use

var position = transform.position + new Vector3(offsetY * j, offsetX * i * -1, -0.1f);
Instantiate(cardBack[i], position, Quaternion.identity, transform);

in order to instantiate them as child objects of the GameObject this Controll script is attached to.
Now if you translate, rotate or scale that parent object, all instantiated objects are transformed along with it.
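For completeness, here is a minimal sketch of the question's Start() with that change applied (same field names as the question's script; untested):

void Start()
{
    for (int i = 0; i < gridRows; i++)
    {
        for (int j = 0; j < gridCols; j++)
        {
            // Offset from the spawner's own position, so moving the parent moves the grid
            var position = transform.position + new Vector3(offsetY * j, offsetX * i * -1f, -0.1f);
            // Passing `transform` as the last argument parents the new card to this GameObject
            Instantiate(cardBack[i], position, Quaternion.identity, transform);
        }
    }
}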

Precisely locating glyph text in WPF

I am writing a chemical molecule editor for Windows. As it has to be used in a Word Add-In I am restricted to using WPF for rendering structures. This is working quite well, apart from one tiny niggling point.
I use GlyphRuns for rendering atom labels, and they are always displaced slightly to the right. If you look at the screenshot you can see there is leading whitespace, especially with the H2N and Hg atom labels. Why? The white background is what you get when you take the outline geometry of the glyph run.
The GlyphRun class is so badly documented that I cannot see which of its properties to amend to locate the text precisely where I want it, so any suggestions to try would be welcome.
UPDATE: I've been asked to provide a sample. The code is complex, but not gratuitously so, so I'm cutting it down to focus on the essentials:
public void MeasureAtCenter(Point center)
{
    GlyphInfo = GlyphUtils.GetGlyphsAndInfo(Text, PixelsPerDip, out GlyphRun groupGlyphRun, center, _glyphTypeface, TypeSize);
    // compensate the main offset vector for any descenders
    Vector mainOffset = GlyphUtils.GetOffsetVector(groupGlyphRun, AtomShape.SymbolSize)
                        + new Vector(0.0, -MaxBaselineOffset)
                        + new Vector(-FirstBearing(groupGlyphRun), 0.0);
    TextRun = groupGlyphRun;
    TextMetrics = new AtomTextMetrics
    {
        BoundingBox = groupGlyphRun.GetBoundingBox(center + mainOffset),
        Geocenter = center,
        TotalBoundingBox = groupGlyphRun.GetBoundingBox(center + mainOffset),
        OffsetVector = mainOffset
    };
}

public static GlyphInfo GetGlyphs(string symbolText, GlyphTypeface glyphTypeFace, double size)
{
    ushort[] glyphIndexes = new ushort[symbolText.Length];
    double[] advanceWidths = new double[symbolText.Length];
    double[] uprightBaselineOffsets = new double[symbolText.Length];
    double totalWidth = 0;
    for (int n = 0; n < symbolText.Length; n++)
    {
        ushort glyphIndex = glyphTypeFace.CharacterToGlyphMap[symbolText[n]];
        glyphIndexes[n] = glyphIndex;
        double width = glyphTypeFace.AdvanceWidths[glyphIndex] * size;
        advanceWidths[n] = width;
        double ubo = glyphTypeFace.DistancesFromHorizontalBaselineToBlackBoxBottom[glyphIndex] * size;
        uprightBaselineOffsets[n] = ubo;
        totalWidth += width;
    }
    return new GlyphInfo { AdvanceWidths = advanceWidths, Indexes = glyphIndexes, Width = totalWidth, UprightBaselineOffsets = uprightBaselineOffsets };
}

public static GlyphUtils.GlyphInfo GetGlyphsAndInfo(string symbolText, float pixelsPerDip, out GlyphRun hydrogenGlyphRun, Point point, GlyphTypeface glyphTypeFace, double symbolSize)
{
    // measure the H atom first
    var glyphInfo = GlyphUtils.GetGlyphs(symbolText, glyphTypeFace, symbolSize);
    hydrogenGlyphRun = GlyphUtils.GetGlyphRun(glyphInfo, glyphTypeFace, symbolSize, pixelsPerDip, point);
    // work out exactly how much we should offset from the center to get to the bottom left
    return glyphInfo;
}

public static Vector GetOffsetVector(GlyphRun glyphRun, double symbolSize)
{
    Rect rect = glyphRun.ComputeInkBoundingBox();
    //Vector offset = (rect.BottomLeft - rect.TopRight) / 2;
    Vector offset = new Vector(-rect.Width / 2, glyphRun.GlyphTypeface.CapsHeight * symbolSize / 2);
    return offset;
}
Indeed the GlyphRun class is a lot of work to use. I would suggest working with FormattedText objects instead. If there are performance issues, you can consider converting the FormattedText to Geometry once and reusing that. The MSDN docs provide a comparison of the different approaches.
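For instance, here is a minimal, self-contained sketch of that approach (the label text, typeface, sizes and origin below are placeholder values, not taken from the question's code):

using System.Globalization;
using System.Windows;
using System.Windows.Media;

// Build the label once...
var formatted = new FormattedText(
    "H2N",                                 // placeholder label text
    CultureInfo.InvariantCulture,
    FlowDirection.LeftToRight,
    new Typeface("Arial"),                 // placeholder font
    20.0,                                  // em size in DIPs
    Brushes.Black,
    1.0);                                  // pixelsPerDip

// ...convert it to Geometry once and reuse it. The geometry's Bounds
// reflect the ink extents, so you can cancel out the leading side
// bearing that shifts a raw GlyphRun to the right.
Geometry outline = formatted.BuildGeometry(new Point(0, 0));
Rect ink = outline.Bounds;
outline.Transform = new TranslateTransform(-ink.X, -ink.Y);

The point of caching the Geometry is that you pay the layout cost once, then draw and hit-test against the same object on every render pass.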

Aligning a card array in-game

I'm creating a TCG (trading card game) and I would like to know how I can change the layout of the cards while playing. I mean that the cards will be spread in a line aligned to the center of the screen, both vertically and horizontally, on a canvas; when I draw or dismiss a card I would like the remaining cards to close the gap and re-align in game. How can I do that? Any ideas? I have a solution for when your turn begins (start from the center of the screen, then step back the length of a step times the number of cards / 2, and then spawn the cards one after another), but I can't figure out how to change the alignment of the cards when you dismiss one of them without reloading them all again...
Using the same method you used for the initial position you should be able to get the new position. Now you have two positions for each card: oldPos and newPos.
Your cards are already instantiated. Their positions are stored in Transform.position. Your goal is to move from oldPos to newPos. The simplest way would be:
myCard.transform.position = newPos;
This will instantly move your cards to their new positions. However, teleporting objects like this is usually avoided because instant jumps feel jarring to users. A better solution is to move the object smoothly from one position to the other.
To do this, you can move an existing object with transform.Translate(movement), where the vector's direction and length determine the movement applied on each call; Translate() is effectively doing position += movementDirection * movementAmount, as you would expect.
Moving an object over successive frames is animation, and there are techniques for making movements look better (faster than they really are, or more natural). One common method from mathematics is linear interpolation, or lerp. Using lerp, you can easily compute intermediate points between two end positions, and the motion will look natural and nice if you move your object along those points. I believe this is what you are looking for.
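As an illustrative sketch (the component and field names here are made up, not from the code below), Unity's built-in Vector2.Lerp computes those intermediate points for you:

using UnityEngine;

public class CardMover : MonoBehaviour
{
    public Vector2 oldPos;  // position before the re-layout
    public Vector2 newPos;  // position after the re-layout
    float t;                // interpolation parameter, clamped to 0..1 by Lerp

    void Update()
    {
        if (t < 1f)
        {
            t += Time.deltaTime; // reaches 1 after ~1 second
            // point t of the way along the straight line from oldPos to newPos
            transform.position = Vector2.Lerp(oldPos, newPos, t);
        }
    }
}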
========
Edit:
Here's an example of how this could be achieved. Note that in this example Card moves by the same distance every frame; using lerp with easing (ease-in, ease-out, etc.) you could make the animation even better.
Another point I would like you to note is that I'm doing if (Vector2.Distance(nextPosition, transform.position) < 10), not if (oldPosition.equals(newPosition)). The reason is that equals() is not safe for comparing floats, because they are often stored as 0.4999999 and 0.50001 instead of 0.5 and 0.5. So the best way of checking floats is to test whether they are "close enough" to each other.
Finally, the following code could be improved in MANY DIFFERENT ways. For instance:
Destroy() and Instantiate() are very slow operations, and you should use object pooling when you know you will perform these operations constantly.
The movement of Card could be improved by better animation technique like lerp.
There may be other ways of storing List<Card> Cards
OnCardClick() is using FindObjectOfType<CardSpawner>().OnCardDeleted(this), which requires Card to know about CardSpawner. This is called tight coupling, which is known as evil; there are plenty of discussions of why this is bad. A recommended solution is to use an event (better yet, a UnityEvent in Unity3d); see the sketch after the code below.
CardSpawner.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CardSpawner : MonoBehaviour
{
    [SerializeField] GameObject CardParent;
    [SerializeField] GameObject CardPrefab;

    Vector2 DefaultSpawnPosition = new Vector2(Screen.width / 2f, Screen.height / 10f);
    List<Card> Cards = new List<Card>();

    public void OnClickButton()
    {
        SpawnNewCard();
        AssignNewPositions();
        AnimateCards();
    }

    public void OnCardDeleted(Card removedCard)
    {
        Cards.Remove(removedCard);
        AssignNewPositions();
        AnimateCards();
    }

    void SpawnNewCard()
    {
        GameObject newCard = (GameObject)Instantiate(CardPrefab, DefaultSpawnPosition, new Quaternion(), CardParent.GetComponent<Transform>());
        Cards.Add(newCard.GetComponent<Card>());
    }

    void AssignNewPositions()
    {
        int n = Cards.Count;
        float widthPerCard = 100;
        float widthEmptySpaceBetweenCards = widthPerCard * .2f;
        float totalWidthAllCards = (widthPerCard * n) + (widthEmptySpaceBetweenCards * (n - 1));
        float halfWidthAllCards = totalWidthAllCards / 2f;
        float centreX = Screen.width / 2f;
        float leftX = centreX - halfWidthAllCards;

        for (int i = 0; i < n; i++)
        {
            if (i == 0)
                Cards[i].nextPosition = new Vector2(leftX + widthPerCard / 2f, Screen.height / 2f);
            else
                Cards[i].nextPosition = new Vector2(leftX + widthPerCard / 2f + ((widthPerCard + widthEmptySpaceBetweenCards) * i), Screen.height / 2f);
        }
    }

    void AnimateCards()
    {
        foreach (Card card in Cards)
            card.StartMoving();
    }
}
Card.cs
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Card : MonoBehaviour
{
    public Vector2 oldPosition;
    public Vector2 nextPosition;

    bool IsMoving;

    void Update()
    {
        if (IsMoving)
        {
            int steps = 10;
            Vector2 delta = (nextPosition - oldPosition) / steps;
            transform.Translate(delta);
            if (Vector2.Distance(nextPosition, transform.position) < 10)
                IsMoving = false;
        }
    }

    public void StartMoving()
    {
        IsMoving = true;
        oldPosition = transform.position;
    }

    public void OnCardClick()
    {
        UnityEngine.Object.Destroy(this.gameObject);
        Debug.Log("AfterDestroy");
        FindObjectOfType<CardSpawner>().OnCardDeleted(this);
    }
}
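Regarding that last bullet point: a minimal sketch of what the decoupling could look like with a UnityEvent (illustrative only; the listener wiring is shown as a comment):

using UnityEngine;
using UnityEngine.Events;

// A serializable event type that carries the card being removed.
[System.Serializable]
public class CardRemovedEvent : UnityEvent<Card> { }

// In Card.cs: raise the event instead of looking up the spawner.
public class Card : MonoBehaviour
{
    public CardRemovedEvent OnCardRemoved = new CardRemovedEvent();

    public void OnCardClick()
    {
        OnCardRemoved.Invoke(this); // Card no longer knows who is listening
        Destroy(gameObject);
    }
}

// In CardSpawner.SpawnNewCard(), subscribe when the card is created, e.g.:
//   newCard.GetComponent<Card>().OnCardRemoved.AddListener(OnCardDeleted);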

How would I map a camera image to create a live funhouse mirror using opencv?

Using OpenCV and Linux I would like to create a fun-house mirror effect (short and squat, tall and thin) using a live web camera. My daughter loves those things and I would like to create one using a camera. I am not quite sure about the transforms necessary for these effects; any help would be appreciated. I have much of the framework running, live video playing and such, just not the transforms.
Thanks
I think that you need to use a 'radial' transform and 'pincushion' (which is the inverse radial).
In order to break the symmetry of the transforms you can stretch the image before and after. Suppose your image is 300x300 pixels:

1. Stretch it to 300x600 or 600x300 using cvResize()
2. Apply the transform: radial, pincushion or sinusoidal
3. Stretch back to 300x300

I have never used radial or sinusoidal transforms in OpenCV so I don't have a piece of code to attach, but you can use cvUndistort2() and see if it is OK.
Create a window with trackbars in the range 0..100; each trackbar controls one parameter of the distortion (a sketch of the same pipeline with the newer OpenCvSharp bindings follows at the end of this answer):
static IplImage* srcImage;
static IplImage* dstImage;
static double _camera[9];
static double _dist4Coeff[4]; // Distortion coefficients
static int _r  = 50; // Radial coefficient. 50 in range 0..100
static int _tX = 50; // Tangential coefficient in the X direction
static int _tY = 50; // Tangential coefficient in the Y direction
static int allRange = 50;

// Open window
cvNamedWindow(winName, 1);
// Add track bars
cvShowImage(winName, srcImage);
cvCreateTrackbar("Radial", winName, &_r,  2*allRange, callBackFun);
cvCreateTrackbar("Tang X", winName, &_tX, 2*allRange, callBackFun);
cvCreateTrackbar("Tang Y", winName, &_tY, 2*allRange, callBackFun);
callBackFun(0);

// The distortion callback
void callBackFun(int arg) {
    CvMat intrCamParamsMat = cvMat(3, 3, CV_64F, _camera);
    CvMat dist4Coeff = cvMat(1, 4, CV_64F, _dist4Coeff);

    // Build the distortion coefficients matrix.
    dist4Coeff.data.db[0] = (_r  - allRange*1.0) / allRange*1.0;
    dist4Coeff.data.db[1] = (_r  - allRange*1.0) / allRange*1.0;
    dist4Coeff.data.db[2] = (_tY - allRange*1.0) / allRange*1.0;
    dist4Coeff.data.db[3] = (_tX - allRange*1.0) / allRange*1.0;

    // Build the intrinsic camera parameters matrix.
    intrCamParamsMat.data.db[0] = 587.1769751432448200 / 2.0;
    intrCamParamsMat.data.db[1] = 0.;
    intrCamParamsMat.data.db[2] = 319.5000000000000000 / 2.0 + 0;
    intrCamParamsMat.data.db[3] = 0.;
    intrCamParamsMat.data.db[4] = 591.3189722549362800 / 2.0;
    intrCamParamsMat.data.db[5] = 239.5000000000000000 / 2.0 + 0;
    intrCamParamsMat.data.db[6] = 0.;
    intrCamParamsMat.data.db[7] = 0.;
    intrCamParamsMat.data.db[8] = 1.;

    // Apply the transformation
    cvUndistort2(srcImage, dstImage, &intrCamParamsMat, &dist4Coeff);
    cvShowImage(winName, dstImage);
}
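Since the code above uses the legacy C API, here is a minimal, untested C# sketch of the same stretch/distort/stretch-back pipeline using the OpenCvSharp wrapper (an assumption: the OpenCvSharp4 package is available; all camera and distortion values are placeholders to tune):

using OpenCvSharp;

class FunhouseMirror
{
    static void Main()
    {
        using var capture = new VideoCapture(0); // default webcam
        var camera = Mat.Eye(3, 3, MatType.CV_64FC1).ToMat();
        camera.Set<double>(0, 0, 300.0); // fx (placeholder)
        camera.Set<double>(1, 1, 300.0); // fy (placeholder)
        camera.Set<double>(0, 2, 320.0); // cx; tune to the stretched image centre
        camera.Set<double>(1, 2, 480.0); // cy; tune to the stretched image centre
        var dist = new Mat(1, 4, MatType.CV_64FC1, new Scalar(0));
        dist.Set<double>(0, 0, -0.5);    // k1 < 0 gives the barrel, "squat" look

        using var frame = new Mat();
        var stretched = new Mat();
        var warped = new Mat();
        var output = new Mat();
        while (capture.Read(frame) && Cv2.WaitKey(1) != 27) // Esc quits
        {
            // 1. stretch to break the symmetry of the radial transform
            Cv2.Resize(frame, stretched, new Size(frame.Cols, frame.Rows * 2));
            // 2. apply the radial distortion
            Cv2.Undistort(stretched, warped, camera, dist);
            // 3. stretch back to the original size
            Cv2.Resize(warped, output, new Size(frame.Cols, frame.Rows));
            Cv2.ImShow("funhouse", output);
        }
    }
}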

How do I convert TLE information to longitude latitude and altitude to display on a Bing Map?

I am using the OrbitTools library to develop a satellite tracking system using the Bing Maps Silverlight control similar to http://karhukoti.com.
I am not knowledgeable in this domain and lack a lot of the background on satellite tracking, but I have started teaching myself, as this particular project was chosen by my supervisor as a graduation project.
However, I have faced numerous difficulties, a major one being how to convert Two-Line Element (TLE) data to latitude, longitude and altitude in order to display the satellite and its path on the map.
I tried the following C# code:
protected void DisplaySatellitePath(List<Eci> Pos)
{
    MapLayer myRouteLayer = new MapLayer();
    myMap.Children.Add(myRouteLayer);
    foreach (Eci e in Pos)
    {
        CoordGeo coordinates = e.toGeo();
        Ellipse point = new Ellipse();
        point.Width = 10;
        point.Height = 10;
        point.Fill = new SolidColorBrush(Colors.Orange);
        point.Opacity = 0.65;
        //Location location = new Location(e.Position.X, e.Position.X);
        Location location = new Location(coordinates.Latitude, coordinates.Longitude);
        MapLayer.SetPosition(point, location);
        MapLayer.SetPositionOrigin(point, PositionOrigin.Center);
        myRouteLayer.Children.Add(point);
    }
}
and also tried
protected void DisplaySatellitePathSecondGo(List<Eci> Pos)
{
    MapLayer myRouteLayer = new MapLayer();
    myMap.Children.Add(myRouteLayer);
    foreach (Eci e in Pos)
    {
        Ellipse point = new Ellipse();
        point.Width = 10;
        point.Height = 10;
        point.Fill = new SolidColorBrush(Colors.Yellow);
        point.Opacity = 0.65;
        Site siteEquator = new Site(e.Position.X, e.Position.Y, e.Position.Z);
        Location location = new Location(siteEquator.Latitude, siteEquator.Longitude);
        MapLayer.SetPosition(point, location);
        MapLayer.SetPositionOrigin(point, PositionOrigin.Center);
        myRouteLayer.Children.Add(point);
    }
}
Can you please tell me what I'm doing wrong here? I searched the net for examples or documentation about OrbitTools, but with no luck.
I really hope that someone using this library could help me or suggest a better .NET library.
Thank you very much.
Is this still something you are struggling with? I noticed when I pulled down the code that they have a demo that they provide along with the library. In it they show the following method which I'm sure you must have looked at:
static void PrintPosVel(Tle tle)
{
    Orbit orbit = new Orbit(tle);
    ArrayList Pos = new ArrayList();

    // Calculate position, velocity
    // mpe = "minutes past epoch"
    for (int mpe = 0; mpe <= (360 * 4); mpe += 360)
    {
        // Get the position of the satellite at time "mpe".
        // The coordinates are placed into the variable "eci".
        Eci eci = orbit.getPosition(mpe);
        // Push the coordinates object onto the end of the array
        Pos.Add(eci);
    }

    // Print TLE data
    Console.Write("{0}\n", tle.Name);
    Console.Write("{0}\n", tle.Line1);
    Console.Write("{0}\n", tle.Line2);

    // Header
    Console.Write("\n  TSINCE            X                Y                Z\n\n");

    // Iterate over each of the ECI position objects pushed onto the
    // position vector, above, printing the ECI position information
    // as we go.
    for (int i = 0; i < Pos.Count; i++)
    {
        Eci e = Pos[i] as Eci;
        Console.Write("{0,4}.00 {1,16:f8} {2,16:f8} {3,16:f8}\n",
                      i * 360,
                      e.Position.X,
                      e.Position.Y,
                      e.Position.Z);
    }

    Console.Write("\n          XDOT             YDOT             ZDOT\n\n");

    // Iterate over each of the ECI position objects in the position
    // vector again, but this time print the velocity information.
    for (int i = 0; i < Pos.Count; i++)
    {
        Eci e = Pos[i] as Eci;
        Console.Write("{0,24:f8} {1,16:f8} {2,16:f8}\n",
                      e.Velocity.X,
                      e.Velocity.Y,
                      e.Velocity.Z);
    }
}
In this it seems they are making the conversion you are looking for. Am I missing something about the problem you are actually having?
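One thing worth checking: PrintPosVel only prints ECI coordinates, while Bing Maps needs geodetic degrees. Here is a minimal sketch building on the question's own e.toGeo() call (one assumption to verify against a known satellite: many OrbitTools builds return CoordGeo angles in radians, which would need converting to degrees before constructing a Location):

// Sketch only; Tle, Orbit, Eci, CoordGeo and Location are the same
// types used in the question and answer above.
Orbit orbit = new Orbit(tle);
for (int mpe = 0; mpe <= 360 * 4; mpe += 10)
{
    Eci eci = orbit.getPosition(mpe); // ECI position at "minutes past epoch"
    CoordGeo geo = eci.toGeo();       // geodetic latitude/longitude/altitude

    // If your build of OrbitTools reports radians, convert to the
    // degrees that Bing Maps' Location expects:
    double latDeg = geo.Latitude * 180.0 / Math.PI;
    double lonDeg = geo.Longitude * 180.0 / Math.PI;
    if (lonDeg > 180.0) lonDeg -= 360.0; // normalize to Bing's -180..180 range

    Location location = new Location(latDeg, lonDeg);
    // ...then position your Ellipse with MapLayer.SetPosition(point, location) as before.
}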
