First of all, I'm pretty new to routing in general and I need it for my bachelor thesis, so I'm rather time-limited in learning it. If there is any more information I can provide to track down the problem, or if you simply have an idea, please let me know. Thanks!
I have a problem with the routing functions offered by the PostGIS extension pgRouting: the results are far from being a shortest path.
I'm using a Docker container with PostGIS and the pgRouting extension:
https://hub.docker.com/r/pgrouting/pgrouting/
My graph was created with osm2po from the "Niedersachsen" extract from Geofabrik.de.
I just followed the osm2po quickstart tutorial to create the graph and load it into my database.
My table is built as:
CREATE TABLE hh_2po_4pgr(
    id integer,
    osm_id bigint,
    osm_name character varying,
    osm_meta character varying,
    osm_source_id bigint,
    osm_target_id bigint,
    clazz integer,
    flags integer,
    source integer,
    target integer,
    km double precision,
    kmh integer,
    cost double precision,
    reverse_cost double precision,
    x1 double precision,
    y1 double precision,
    x2 double precision,
    y2 double precision
);
SELECT AddGeometryColumn('hh_2po_4pgr', 'geom_way', 4326, 'LINESTRING', 2);
The only thing I configured was setting my cost and reverse_cost to the distance, but even without this change the issue stays the same. I also ran pgr_analyzeGraph, which returned OK, which normally indicates that the graph is configured correctly.
When I now run a query like this:
SELECT * FROM pgr_astar('SELECT id, source, target, cost, x1, y1, x2, y2 FROM hh_2po_4pgr', 232516, 213104, FALSE, 2);
It gives me seemingly random routes that are either empty or far too long.
The IDs I'm testing with are close to each other and all connected by streets.
In this case it should be a route of a few hundred metres, but I get a route with over 1000 segments and almost 100 km.
Note: I also tried other functions like pgr_dijkstra.
Solution:
SELECT source FROM hh_2po_4pgr ORDER BY geom_way <-> ST_SetSRID(ST_Point(:pointX, :pointY), 4326) LIMIT 1;
SELECT target FROM hh_2po_4pgr ORDER BY geom_way <-> ST_SetSRID(ST_Point(:pointX, :pointY), 4326) LIMIT 1;
Use these SELECT statements to get the correct IDs to pass to your pgRouting function.
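Putting it together, a sketch of the whole lookup-then-route call; the :srcX/:srcY and :dstX/:dstY placeholders are the start and end coordinates (lon/lat) and are purely illustrative:

-- Find the nearest source/target vertex to each point, then route between them.
SELECT *
FROM pgr_astar(
    'SELECT id, source, target, cost, x1, y1, x2, y2 FROM hh_2po_4pgr',
    (SELECT source FROM hh_2po_4pgr
     ORDER BY geom_way <-> ST_SetSRID(ST_Point(:srcX, :srcY), 4326) LIMIT 1),
    (SELECT target FROM hh_2po_4pgr
     ORDER BY geom_way <-> ST_SetSRID(ST_Point(:dstX, :dstY), 4326) LIMIT 1),
    FALSE, 2
);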
If anyone has issues with pgRouting, feel free to message me; when I check my Stack Overflow messages I will try to help. The documentation and tutorials can be rather confusing.
I'm pretty new to Logic Apps, so I'm still learning my way around custom expressions. One thing I cannot seem to figure out is how to convert a FileTime value to a DateTime value.
FileTime value example: 133197984000000000
I don't have a desired output format, as long as Logic Apps can understand that this is a DateTime value and can run before/after date logic on it.
To achieve your requirement, I converted the Windows FileTime to Unix time in seconds and then added those seconds to the base date 1970-01-01T00:00:00Z. Here is the official documentation that I followed. Below is the expression that worked for me.
addSeconds('1970-01-01T00:00:00Z', div(sub(133197984000000000,116444736000000000),10000000))
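About the two constants: a Windows FileTime counts 100-nanosecond ticks since 1601-01-01, 116444736000000000 is the number of ticks between 1601-01-01 and the Unix epoch, and dividing by 10,000,000 converts ticks to seconds. As a sanity check of those constants outside Logic Apps (not part of the solution itself), the same arithmetic in T-SQL:

DECLARE @filetime bigint = 133197984000000000;
-- Subtract the 1601-to-1970 offset, convert 100-ns ticks to seconds, add to the Unix epoch.
SELECT DATEADD(SECOND,
               CAST((@filetime - 116444736000000000) / 10000000 AS int),
               '1970-01-01T00:00:00');
-- Should come out as 2023-02-02 08:00:00 (UTC).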
This isn't likely to float your boat, but the Advanced Data Operations connector can do it for you.
The unfortunate piece of the puzzle is that (at this stage) it doesn't just work as is, but rest assured that this functionality is coming.
Meaning, you need to do some trickery if you want to use it for this.
By this I mean: if you use the Xml to Json operation, you can use the built-in functions that come with the conversion to do it for you.
This is an example of what I mean ...
You can see that I have constructed some XML that is then passed into the Data parameter. That XML contains your Windows FileTime value.
I have then set up the Map Object to take that value and use the built-in ADO function FromWindowsFileTime to convert it to a DateTime value.
The Primary Loop at Element is the XPath query that selects the relevant values to loop over.
The result is this ...
Disclaimer: I should point out, this is due to drop in preview sometime in the middle of Jan 2023.
They have another operation in development that will make this a lot easier, but for now, this is your easiest and cheapest option.
This kind of thing is also available in the Transform and Expert operations, but those are in the next pricing tier.
I'm not able to understand how the geography data type in SQL Server works...
For example I have the following data:
0xE6100000010CCEAACFD556484340B2F336363BCA21C0
what I know:
0x is prefix for hexadecimal
the last 16 hex digits are the longitude: B2F336363BCA21C0 (an IEEE 754 double, little-endian)
the 16 hex digits before those are the latitude: CEAACFD556484340 (an IEEE 754 double, little-endian)
the first 4 hex digits are the SRID: E610 (hexadecimal for WGS84)
what I don't understand:
hex digits 5 to 12: 0000010C
what is this?
From what I read this seems linked to WKB (Well Known Binary) or EWKB (Extended Well Known Binary); in any case I was not able to find a definition for EWKB...
And for WKB this part is supposed to be the geometry type (a 4-byte integer), but the value doesn't match the geometry type codes (this example is a single point).
Can you help me understand this format?
The spatial types (geometry and geography) in SQL Server are implemented as CLR data types. As with any such data type, you get a binary representation when you query the value directly. Unfortunately, it's not (as far as I know) WKB but rather whatever format Microsoft decided was best for their implementation. For us (the users), the right approach is to work with the interface of methods that MS has published (for instance the geography method reference). Which is to say that you should only try to decipher the MS binary representation if you're curious, not for actually working with it.
That said, if you need/want to work with WKB, you can! For example, you can use the STGeomFromWKB() static method to create a geography instance from WKB that you provide and STAsBinary() can be called on a geography instance to return WKB to you.
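For example, a small sketch in T-SQL; the point is approximately the one encoded in the hex value above, but the coordinates here are illustrative:

DECLARE @g geography = geography::Point(38.5651, -8.8944, 4326);  -- lat, long, SRID

SELECT @g               AS internal_format,  -- the proprietary binary you see when querying the column
       @g.STAsBinary()  AS wkb,              -- standard Well Known Binary
       @g.STAsText()    AS wkt;              -- standard WKT, longitude first: POINT (-8.8944 38.5651)

-- And going the other way, building an instance from WKB:
SELECT geography::STGeomFromWKB(@g.STAsBinary(), 4326).STAsText();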
The Format spec can be found here:
https://msdn.microsoft.com/en-us/library/ee320529(v=sql.105).aspx
As that page shows, it used to change very frequently, but has slowed down significantly over the past 2 years
I currently need to dig into the spec to serialize from JVM code into a bcp file, so that I can use SQLServerBulkCopy rather than plain JDBC to upload data into tables (it is about 7x faster to write a bcp file than to use JDBC), but this is proving to be more complicated than I originally anticipated.
After testing with bcp: you can upload geographies by specifying an off-row format (varchar(max)) and storing the well-known text; SQL Server will see this and build a geography from the WKT it finds.
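Roughly what that looks like (the table and column names here are made up), relying on the character-to-geography conversion that the bcp trick exploits:

-- Hypothetical table with a geography column.
CREATE TABLE dbo.Places
(
    Id  int IDENTITY PRIMARY KEY,
    Geo geography NULL
);

-- Whether the text arrives via bcp / SQLServerBulkCopy or a plain INSERT,
-- SQL Server parses the WKT string into a geography value (SRID defaults to 4326).
INSERT INTO dbo.Places (Geo)
VALUES ('POINT (-8.8944 38.5651)');

SELECT Id, Geo.STAsText() AS Wkt FROM dbo.Places;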
In my case converting to nvarchar resolved the issue.
Hope someone can help! We're encountering an issue with the Pervasive VAccess control whereby, any time we save an item of type 'double' into a Pervasive database, the value saved is different from the one we want...
As an example we try to save 1.44 and it actually saves 1.44004035454
A slight difference but a difference nonetheless!
FYI, the field defined in the DDF has decimals set to 0; I'm wondering if one course of action is to set this to e.g. 4? But I thought I'd see if anyone can shed any light on it before we head down that path...
The underlying effect has nothing to do with Pervasive; it's a simple floating-point issue. You'll find the same in any system that uses single- or double-precision floating point, though some systems round automatically to hide this from you.
See http://en.wikipedia.org/wiki/Floating_point
In the case of PostgreSQL and its derivatives, you can set extra_float_digits to control this rounding.
regress=> SET extra_float_digits = 3;
SET
regress=> SELECT FLOAT8 '1.44';
float8
---------------------
1.43999999999999995
(1 row)
regress=> SET extra_float_digits = 0;
SET
regress=> SELECT FLOAT8 '1.44';
float8
--------
1.44
(1 row)
It defaults to 0, but your client driver might be changing it. If you're using JDBC (which I'm guessing you are), don't mess with this setting; the JDBC driver expects it to remain how the driver sets it and will get upset if you change it.
In general, if you want a human-readable formatted number you should do the rounding with round or to_char, or do it client-side, instead. Note that there's no round(double precision, integer) function (you have to cast to numeric first) for reasons explained in answers to this question. So you'll probably want to_char, e.g.:
regress=> SELECT to_char(FLOAT8 '1.44', 'MI999999999D99');
to_char
---------------
1.44
(1 row)
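If you'd rather get a number back than text, casting to numeric and then rounding also works; a small sketch in the same session style:

regress=> SELECT round(FLOAT8 '1.44'::numeric, 2);
 round
-------
  1.44
(1 row)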
(I wish PostgreSQL exposed a version of the cast from float8 to text that let you specify extra_float_digits on a per-call basis. That's often closer to what people really want. Guess I should add that if I get the time...)
I'm writing a .NET web app on top of an existing DB app (SQL Server). The original developer stored money in float columns instead of money or decimal. I really want to use Decimal inside my POCOs (especially because I will be doing further operations on the values after I get them).
Unfortunately, I cannot touch the DB schema. Is there a way to still use Decimal in my POCOs and tell EF "It's ok, I know decimal and float don't get along. I don't like it either. Just do your best."
With no special configuration, I get this error:
The specified cast from a materialized 'System.Double' type to the 'System.Decimal' type is not valid.
I tried using modelBuilder.Entity(Of myPocoType).Property(Function(x) x.MoneyProperty).HasColumnType("float"), but that gets me this error:
Schema specified is not valid. Errors:
(195,6) : error 0063: Precision facet isn't allowed for properties of type float.
(195,6) : error 0063: Scale facet isn't allowed for properties of type float.
Something like this would work:
public class MyPocoType
{
    public float FloatProp { get; set; }

    [NotMapped]
    public decimal DecimalProp
    {
        get
        {
            return (decimal)FloatProp;
        }
        set
        {
            FloatProp = (float)value;
        }
    }
}
EF will ignore the decimal one, but you can use it and it'll set the underlying float. You can add your own logic for handling the loss of precision if there's anything special you want to do, and you might need to catch cases where the value is out of range of the type it's being converted to (float has a much larger range, but decimal has much greater precision).
It's not a perfect solution, but if you're stuck with floats then it's not going to get much better. A float is inherently a little fuzzy, so that's going to trip you up somewhere.
If you really want to get complex, you could look at keeping the decimal value internally and then using the various events that happen at save time (some logic in an overridden SaveChanges(), or catching the SavingChanges event perhaps) to convert to float only once, to cut down on the buildup of conversion errors.
When I browse the cube and pivot Sales by Month (for example), I get something like 12345.678901.
Is there a way to make it so that when a user browses they get values rounded to two decimal places, i.e. 12345.68, instead?
Thanks,
-teddy
You can enter a format string in the properties of your measure or calculation, and if your OLAP client supports it, the formatting will be used. E.g. for 1 decimal place you'd use something like "#,0.0;(#,0.0)". Excel supports format strings by default, and you can configure Reporting Services to use them.
Also if you're dealing with money you should configure the measure to use the Currency data type. By default Analysis Services will use Double if the source data type in the database is Money. This can introduce rounding issues and is not as efficient as using Currency. See this article for more info: The many benefits of money data type. One side benefit of using Currency is you will never see more than 4 decimal places.
Either edit the display properties in the cube itself, so it always returns 2 decimal places whenever anyone browses the cube.
Or add a format string when running MDX:
WITH MEMBER [Measures].[NewMeasure] AS '[Measures].[OldMeasure]', FORMAT_STRING='##0.00'
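In a full query that fragment sits at the top, something like the following (the cube and dimension names are placeholders):

WITH MEMBER [Measures].[NewMeasure] AS
    '[Measures].[OldMeasure]',
    FORMAT_STRING = '##0.00'
SELECT
    { [Measures].[NewMeasure] } ON COLUMNS,
    { [Date].[Month].Members }  ON ROWS
FROM [MyCube]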
You can change the Format String property of your measure. There are two possible ways:
If the measure is a direct measure -
go to the measure's properties and update 'Format String'.
If the measure is a calculated measure -
go to Calculations and update 'Format String'.
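For the calculated-measure case, the definition in the cube's MDX script ends up looking roughly like this (the measure names are illustrative):

CREATE MEMBER CURRENTCUBE.[Measures].[Sales Amount Rounded]
    AS [Measures].[Sales Amount],
    FORMAT_STRING = "#,##0.00",
    VISIBLE = 1;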