google maps latitude longitude length

We currently use lat/long stored in our database to display worldwide grocery stores, with 4 digits to the right of the decimal place (e.g. 36.4488). We are in the process of updating all records to be more accurate on Google Maps. Should we extend the lat/long to 6 digits to the right of the decimal place as part of this process? Our code would have to change to handle this, and I wonder whether it is really worth the payoff or whether 4 digits will suffice. I also noticed that a marker created with position: latlng seems to display in a different place than a marker created with position: point (where point is set by point = results[0].geometry.location;). Has anyone seen this before? Thanks for any responses.

If you use 4 digits to the right of the decimal place, the precision of each geolocated point is around 11 meters. Six digits, on the other hand, gives a precision of about 11 centimeters, so if you want the exact location of each store you should use 6 digits instead of 4.
In response to the second question, if the lat/long of the marker is correct, you shouldn't have problems with it, so review the coordinates of the point.
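As a rough sanity check on those figures, here is a small Java sketch (the class and method names are mine, just for illustration) that converts a number of decimal digits into an approximate ground distance, using roughly 111,320 m per degree at the equator:

public class LatLngPrecision {
    // Approximate length of one degree of latitude (or of longitude at the equator), in meters.
    private static final double METERS_PER_DEGREE = 111_320.0;

    // Ground distance covered by the last decimal digit, e.g. 4 digits -> ~11 m, 6 digits -> ~0.11 m.
    static double metersPerLastDigit(int decimalDigits) {
        return METERS_PER_DEGREE * Math.pow(10, -decimalDigits);
    }

    public static void main(String[] args) {
        System.out.printf("4 digits: ~%.2f m%n", metersPerLastDigit(4)); // ~11.13 m
        System.out.printf("6 digits: ~%.2f m%n", metersPerLastDigit(6)); // ~0.11 m
    }
}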

Related

Limit number of positions after decimal point in MariaDB

I am currently making a small MariaDB database and ran into the following problem:
I want to save a floating-point number with only 2 positions after the decimal point, but everything before the decimal point should be unaffected.
For example: 1.11; 56789.12; 9999.00; 999999999999.01 etc.
I have done some research and this is what I am using right now:
CREATE TABLE mytable (
mynumber DOUBLE(10, 2)
)
The problem with this solution is that I also have to limit the number of positions before the decimal point, which I don't want to do.
So is there a way to limit the number of positions after the decimal point without affecting the positions before it, or is there a "default number" I can use for the positions before the decimal point?
Don't use (m,n) with FLOAT or DOUBLE. It does nothing useful and causes an extra rounding step.
DECIMAL(10,2) is possible; that will store numbers precisely (to 2 decimal places).
See also ROUND() and FORMAT() for controlling the rounding for specific values.
You had a mistake -- 999999999999.01 won't fit in DOUBLE(10,2) or DECIMAL(10,2); those can handle only 8 (= 10 - 2) digits to the left of the decimal point.
You can create a trigger that intercepts INSERT and UPDATE statements and truncates their value to 2 decimal places. Note, however, that due to how floating point numbers work at machine level, the actual number may be different.
Double precision numbers are accurate to roughly 15-16 significant figures, not to a certain number of decimal places. Realistically, you need to determine the biggest value you might ever want to store. Once you have done that, the DECIMAL type may be more appropriate for what you are trying to do.
See here for more details:
https://dev.mysql.com/doc/refman/8.0/en/precision-math-decimal-characteristics.html
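If changing the column type is not an option, roughly the same effect can be approximated on the application side; below is a minimal Java sketch (assuming values are rounded to 2 decimal places before they are inserted), using BigDecimal so the digits before the decimal point are never limited:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class TwoDecimals {
    // Limit the scale (digits after the decimal point) without limiting the digits before it.
    static BigDecimal toTwoDecimals(BigDecimal value) {
        return value.setScale(2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(toTwoDecimals(new BigDecimal("999999999999.018"))); // 999999999999.02
        System.out.println(toTwoDecimals(new BigDecimal("1.1")));              // 1.10
    }
}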

How to round Doubles to an exact bit representation?

I'm doing geography calculations, and ultimately end up with a latitude and longitude to store in a Geography::Point object.
Both latitude and longitude have at most 7 digits after the decimal point (which also gives precision down to about 11 mm, which is plenty).
The problem is: if the value of a field cannot be stored exactly in a Double, MS SQL rounds it towards the nearest value that can be, which shows up as a bunch of extra digits.
=> e.g. 5.9395772 is stored as 5.9395771999999996
The problem this creates is that [Position].ToString() then exceeds the maximum number of characters allowed for that column (and no, I can't increase that limit).
Since we're dealing with Latitude, Longitude, Altitude and Accuracy, there's space for exactly 11 characters for Latitude and Longitude each:
String.Format(CultureInfo.InvariantCulture, "{0:##0.0######}", num)
I've tried simply Math.Round()ing to 6 digits, but then other numbers get the same problem (e.g. 6.098163 becomes 6.0981629999999996).
How do I Math.Round towards the nearest 7-digit valid bit representation?
EDIT/ADD
Public Function ToString_LatLon(ByVal num As Double) As String
num = Math.Round(num, 7, MidpointRounding.AwayFromZero)
Return String.Format(CultureInfo.InvariantCulture, "{0:##0.0######}", num)
End Function 'IN = 5.9395772, OUT = 5.9395772
The above code receives a Double and correctly returns the String representation. I've checked it; it is correct even for the troublesome numbers.
It's stored in SQL Server through the framework we use. I think the problem occurs when storing the value.
When I retrieve the value, I get an error in VB, saying the value is wider than the framework allows (max of 50 characters).
If I run a query in SSMS, I find e.g. POINT (X.0981629999999996 XX.664725 NULL 15602.707) (51 characters, anonymized).
EDIT 2
I've done some more research and some calculations. It seems that the stored value 5.9395772 is converted to binary and comes back as 5.9395771999999996, which is stored as a double inside the database (in a binary Geography::Point object, not to worry). Convert the binary 0 10000000001 0111110000100010000010000110100010000100010011011101 back to decimal and you get 5.93957719999999955717839839053340256214141845703125, abbreviated at 16 decimals - whereas I would like it abbreviated at 7 decimals.
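For anyone who wants to reproduce that, here is a small Java sketch (the question's own code is VB.NET; this is only to illustrate the representation) that prints the raw bit pattern and the exact decimal value of the double:

public class DoubleBits {
    public static void main(String[] args) {
        double d = 5.9395772;
        // Raw IEEE 754 bit pattern of the double (sign, exponent, mantissa).
        System.out.println(Long.toBinaryString(Double.doubleToLongBits(d)));
        // The exact decimal value of that bit pattern - far more digits than were ever typed in.
        System.out.println(new java.math.BigDecimal(d));
    }
}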
Solutions:
1. Round the value down/up to the nearest value where everything from the 8th decimal onward is 0 (or enough zeroes before another nonzero digit is found).
2. Query for only so many decimals.
3. Query the actual (hexadecimal) value, and convert that (instead of the string representation).
4. Keep the string representation, but round the values before storing and after retrieving to the required number of decimals.
Discussions:
1. Both in the office and here (see #RobertBaron's answer): this is quite tricky, may cost a lot of precision, and is basically a lot of work.
2. Perhaps this is possible, I don't know.
3. This would be the cleanest solution, as my colleagues and I agree, but it is a lot of work to develop and test.
4. Instead of requiring the value in memory to be equal to the value in the database, we simply don't care (too much) about the value in the database.
In the end, after quite some whiteboard bit-calculations and a lengthy discussion, we've gone with option 4. After we retrieve the [Position].ToString() (for which we've increased the string limit) from the database, we convert it as we're already doing, and as an additional step, before using it anywhere, we round the value to the required number of decimals. When returning the value to the database, we once again round it to that number of decimals and don't care what the database really does with it.
Essentially, this is option 2, but on the program side instead of the database side.
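A minimal sketch of that round-on-write / round-on-read idea, in Java rather than the project's VB.NET, with hypothetical method names:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class CoordinateRoundTrip {
    // Round to 7 decimal places both before storing and after retrieving, so the noise
    // introduced by the double representation never reaches the rest of the program.
    static double roundTo7(double coordinate) {
        return BigDecimal.valueOf(coordinate)
                         .setScale(7, RoundingMode.HALF_UP)
                         .doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(roundTo7(5.9395771999999996)); // 5.9395772
        System.out.println(roundTo7(6.0981629999999996)); // 6.098163
    }
}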
This is only a partial answer.
If by valid bit representation you mean exact bit representation, then this is possible. The decimal numbers that have exact bit representation are 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, 1/16, 3/16, ...
The challenge is to characterize, among these dyadic fractions, those whose base-10 representation has 7 digits or less, and then to round any base-10 number to the closest of these numbers.
I am posting this in the hope that it may get you one step further toward a solution.
If you cannot change the data type into a DECIMAL for whatever reasons, you have to cast it into a DECIMAL every time you need the value. It's that simple. And you can either do it on the SQL Server side or in VB.NET, but you need a DECIMAL. DOUBLEs are imprecise.
By the way, it is not SQL Server that rounds towards the nearest number it recognizes by adding a bunch of digits - it's the floating-point hardware that does it. That's also why you may get slightly different DOUBLE values after restoring your database on another server.
And never ever even think of using them as an ID: I know an application that uses FLOAT values containing a timestamp (<creation day since whatever>.<time as fraction of the day>) as part of the primary key (of nearly every table!). Every 10000th record or so cannot be addressed directly by its ID, because the value differs by some nanoseconds between the client that sends the query and the server, although the number looks exactly the same in SSMS on the client and the server.

Hash GPS coordinates to be unrecoverable yet still representative?

So I would like to write a program that allows computers that know their GPS locations to store their location in a database, as well as to look up the 'locations' of the other computers and see which other similar computers are within a certain approximate range.
However, I would also like people to be able to access that database without knowing the exact location of other computers, just which ones are within an approximate range. The range can be fairly loose - say, all computers within 3 miles are reported and none more than 5 miles away are, while the ones in between can go either way. I'm thinking I can somehow hash the coordinates before putting them into the database. Any thoughts as to how I can do that?
Thank you!
A coordinate pair (latitude, longitude) can be represented by two integers (4 bytes each) by multiplying by 10,000,000 (1E7).
Now your task is reduced to encrypting two integers.
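A minimal Java sketch of that scaling step (the field names are just for illustration):

public class ScaledCoordinate {
    // Latitude/longitude scaled by 1E7 and stored as 32-bit integers (~1.1 cm resolution).
    final int latE7;
    final int lonE7;

    ScaledCoordinate(double latitude, double longitude) {
        this.latE7 = (int) Math.round(latitude * 1e7);
        this.lonE7 = (int) Math.round(longitude * 1e7);
    }

    public static void main(String[] args) {
        ScaledCoordinate c = new ScaledCoordinate(36.4488123, -122.1234567);
        System.out.println(c.latE7 + ", " + c.lonE7); // 364488123, -1221234567
    }
}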

Quantity reference type, etc

I have been working on ADempiere these past few days and I am confused about something.
I created a new column on my database table named Other_Number with the reference type Quantity. Max length is 20.
On my Java source, I used BigDecimal.
Now every time I try to input exactly 20 digits in the Other_Number field, the last 4 digits get rounded. Say I input 12345678901234567891; when I try to save it, it becomes 12345678901234567000.
Other than that, all the records that get saved to the database (PostgreSQL) get appended with ".000000000000" (that's 12 zeros).
Now I need to do something so that when I input 20 digits, the last 4 digits don't get rounded.
Also I need to get rid of that ".000000000000"
Can you please tell me why this is happening?
ADempiere, as financial ERP software, is careful about how it deals with financial amounts. In the database, the exact BigDecimal value has to maintain its data integrity, and precision and rounding are handled as carefully as possible in the code. Having grown out of the established Compiere ERP project, from which iDempiere and Openbravo are also forked, such financial amount management is already well defined and solved.
Perhaps you need to set precision in its appropriate window http://wiki.idempiere.org/en/Currency_%28Window_ID-115%29
If it's not actually a number you want but rather some kind of reference field that contains only numeric digits, change the definition in the Application Dictionary to be:
Reference: String
Length: 20
Value Format: 00000000000000000000 (i.e. 20 Zeros!)
This will force the input to be numeric only (i.e. alpha characters will be ignored!) and, because it is a String, there will be no rounding.
ADempiere supports up to 14 (+5) digits (trillions) for business amounts/quantities (USD currency).
What currency are you using? Is it realistic to need amounts/quantities this large in an ERP system?
If you want to change the logic, you can do so in the getNumberFormat method of the DisplayType.java class.
What was the business scenario?
In the ADempiere Java code, the setScale method is used to round the value.
Example:
BigDecimal len = value;
len = len.setScale(2, BigDecimal.ROUND_HALF_UP); // scale of 2, rounding mode 4 (HALF_UP)
setLength(len);
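The rounding you are seeing is what happens whenever the value passes through a double, which only carries about 15-16 significant decimal digits; a BigDecimal built directly from the input string keeps all 20 digits. A small illustrative sketch:

import java.math.BigDecimal;

public class TwentyDigits {
    public static void main(String[] args) {
        String input = "12345678901234567891";

        // Going through double throws away everything past ~16 significant digits.
        double asDouble = Double.parseDouble(input);
        System.out.println(new BigDecimal(asDouble).toBigInteger()); // 12345678901234567168

        // Building the BigDecimal straight from the string keeps every digit.
        System.out.println(new BigDecimal(input));                   // 12345678901234567891
    }
}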

geohash string length and accuracy

If the geohash string is longer, it is more accurate. But is there any direct relationship, e.g. does a length of 7 provide 100 meter accuracy?
I.e. if two geohashes (and either of their bounding boxes) have their first 7 characters matching, should both be within about 100 meters of each other?
I am using geohashes for finding all nearby locations for a given geohash, together with their distance.
Also, is there any direct way to calculate the distance between two geohashes? (One way is to decode them to lat/lng and then calculate the distance.)
Thanks
Saw a lot of confusion around geohashing so I am posting my understanding so far.
The principle behind geohashing is very simple; you can create your own version.
For instance, consider the following geo-point:
156.34234534,-23.343423345
In the above example, 156 represents degrees, the 2 digits after the decimal point (34) represent decimal minutes, and the rest (34.5334) represents seconds.
If you remember school geography, the circumference of the earth at the equator is about 40,000 km, and the number of degrees around the earth (latitudes or longitudes) is 360. So at the widest point, each degree of latitude or longitude spans about 110 km (40,000/360).
So if you encode the above coordinates as "156-23" (including the negative sign), this gives you a (110 km x 110 km) box.
You can go on and increase the precision:
The first digit of the minutes (156.3-23.3) gives you a (10 km x 10 km) box (each minute span equals about 1 km).
Increase this to include the first digit of the seconds and you get a (100 m x 100 m) box;
each extra digit adds another degree of precision.
Geohashing is just a way to represent the above figure in an encoded form. You can happily use the above format as well!
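A toy Java sketch of that truncate-and-concatenate idea (this is not a real geohash, just the principle described above; the names are mine):

import java.util.Locale;

public class CoordinateBox {
    // Keep the given number of decimal digits of each coordinate (rounded here, for simplicity)
    // and join them into one key; more digits kept means a smaller box and more precision.
    static String boxKey(double latitude, double longitude, int decimals) {
        String fmt = "%." + decimals + "f";
        return String.format(Locale.ROOT, fmt, latitude) + "_"
                + String.format(Locale.ROOT, fmt, longitude);
    }

    public static void main(String[] args) {
        // Two points share a key exactly when they fall in the same box.
        System.out.println(boxKey(-23.343423345, 156.34234534, 1)); // -23.3_156.3     (roughly 10 km box)
        System.out.println(boxKey(-23.343423345, 156.34234534, 3)); // -23.343_156.342 (roughly 100 m box)
    }
}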
I was curious about this myself.
If it's any good to anyone, I put together a spreadsheet here.
Not 100% sure it's right - feel free to comment if you find a problem.
Judging by the graph below, using 6 to 10 digits gives accuracy from ~1 km to ~1 m at 60 degrees latitude.
Here are the formulas for height and width in degrees of a geohash of length n characters:
First define this function:
parity(n) = 0 if n is even otherwise 1
Then
height = 180 / 2^((5n - parity(n)) / 2) degrees
width = 180 / 2^((5n + parity(n) - 2) / 2) degrees
Note that this is the height and width in degrees only. To convert this to metres requires that you know where on the earth the hash is.
Code for this in Java is at http://github.com/davidmoten/geo.
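A direct Java translation of those two formulas (degrees only, as noted above):

public class GeohashCellSize {
    static int parity(int n) {
        return n % 2 == 0 ? 0 : 1;
    }

    // Height (latitude extent) in degrees of a cell for an n-character geohash.
    static double heightDegrees(int n) {
        return 180.0 / Math.pow(2, (5.0 * n - parity(n)) / 2.0);
    }

    // Width (longitude extent) in degrees of a cell for an n-character geohash.
    static double widthDegrees(int n) {
        return 180.0 / Math.pow(2, (5.0 * n + parity(n) - 2) / 2.0);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 12; n++) {
            System.out.printf("%2d chars: %.10f x %.10f degrees%n", n, widthDegrees(n), heightDegrees(n));
        }
    }
}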
Also any directway to calculate distance between two geo-hash? (one way is to decode them to lat/lng, and then calculate distance)
That is what you should do. Think of a geohash as just another representation of a latitude and longitude, the same way a pair of printed decimal numbers is. If I gave you a pair of lat & lon strings, you would parse them to numbers (in your programming language of choice), and then do the math. It's no different with geohashes -- decode to lat & lon, then do the math.
Be very careful with any reasoning that tries to infer closeness from the length of the common prefix between a pair of points. If there is a long common prefix, then they are close, but the converse is not true! -- i.e. two points with no common prefix could be a millimeter apart.
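A sketch of that decode-then-do-the-math approach in Java, assuming each geohash has already been decoded to its centre latitude/longitude (for example with the library linked above); the haversine formula gives the great-circle distance:

public class GeohashDistance {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Haversine great-circle distance in meters between two decoded centre points.
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Centres of two nearby cells, already decoded from their geohashes.
        System.out.printf("%.1f m%n", distanceMeters(52.5200, 13.4050, 52.5206, 13.4094));
    }
}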
Here is an equation (in pseudocode) that can approximate the optimal Geohash length for a latitude/longitude pair having a certain precision:
geohash_length = FLOOR( LOG_2(5000000 / precision_in_meters) / 2.5 + 1 )
if geohash_length > 12 then geohash_length = 12
if geohash_length < 1 then geohash_length = 1
I've used it to create the optimal geohash from data received from the gpsd daemon, which also provides precision information via the epx and epy values.
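The same formula in Java (here the precision is the reported position error in meters, e.g. gpsd's epx/epy values):

public class GeohashLength {
    // Approximate geohash length so that one cell is no larger than the given precision.
    static int geohashLength(double precisionInMeters) {
        int length = (int) Math.floor(log2(5_000_000.0 / precisionInMeters) / 2.5 + 1);
        return Math.max(1, Math.min(12, length));
    }

    static double log2(double x) {
        return Math.log(x) / Math.log(2);
    }

    public static void main(String[] args) {
        System.out.println(geohashLength(10));     // 8  (precise fix)
        System.out.println(geohashLength(100000)); // 3  (coarse fix)
    }
}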
