Problems with PostGIS CTE

I am trying to run the with-as example on slide 22 from
http://www.postgis.us/downloads/oscon2009_PostGISTips.pdf
When I paste the example in the SQL-shell I get the following error:
ERROR: operator does not exist: record - integer
LINE 14: ... x, y, ST_SetSRID(ST_MakeBox2d(ST_Point(xmin + (x - 1)*g_wid...
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
I am using POSTGIS 1.4.1 with Postgres 8.4
I used the data that accompany the slides
http://www.bostongis.com/downloads/oscon2009/oscon2009_src.zip
I also tried the states data from the web with:
shp2pgsql -s 2163 statesp020.shp public.states > states.sql
psql -U postgres -d postgis -f states.sql
But I got the same error.
Any advice will be much appreciated.

The error "record - integer" (x - 1) can be fixed by giving the result from generate_series the intended aliases x and y. Find this part:
...
(SELECT generate_series(1,x_gridcnt) FROM usext) As x CROSS JOIN
(SELECT generate_series(1,y_gridcnt) FROM usext) As y CROSS JOIN
...
where the x and y values were actually called x.generate_series and y.generate_series.
Change this to:
...
(SELECT generate_series(1,x_gridcnt) As x FROM usext) As x CROSS JOIN
(SELECT generate_series(1,y_gridcnt) As y FROM usext) As y CROSS JOIN
...
where they are now called x.x and y.y, or implied as just x and y.
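For a minimal stand-alone illustration of the naming issue (a hypothetical query, not from the slides): without the column alias, the subquery only exposes a column named generate_series, so the outer reference x resolves to the whole row record and x - 1 raises the record - integer error.
-- fails: x is the subquery's row record
SELECT x - 1 FROM (SELECT generate_series(1,3)) As x;
-- works: x now also names an integer column
SELECT x - 1 FROM (SELECT generate_series(1,3) As x) As x;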

Related

Creating a formula in SQL Server

Maybe some of you could help me with the creation of a formula in SQL. I need to calculate the result of the expression for all given formulas. The notation of the formula is simple: P(X) means that the expression adds the integer X in parentheses, M(Y) means that the expression subtracts the integer Y in parentheses, and the "+" symbol combines elements of the formula.
Example: given the formula P(10)+M(5)+M(3)+P(1), it translates to 10 - 5 - 3 + 1 = 3.
The result should look like this:
These simple formulas can be handled directly in Microsoft SQL Server:
Split the formula into its parts with STRING_SPLIT, using + as the separator.
Use REPLACE to apply a negative sign: P(X) --> X and M(X) --> -X.
Use CONVERT to turn the string parts into numbers.
Add everything up with a SUM aggregation and a GROUP BY clause.
Sample data
create table input
(
formula nvarchar(50)
);
insert into input (formula) values
('P(10)+M(5)+M(3)+P(1)'),
('P(7)+M(3)+M(4)');
Solution
select i.formula,
sum(convert(int, replace(replace(replace(s.value,'P(', ''),'M(','-'),')',''))) as rez
from input i
cross apply string_split(i.formula, '+') s
group by i.formula;
Result
formula              rez
-------------------- ---
P(10)+M(5)+M(3)+P(1) 3
P(7)+M(3)+M(4)       0
Fiddle to see everything in action with intermediate steps.
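For readers who cannot open the fiddle, here is a rough sketch of those intermediate steps against the same sample table (the aliases part, signed_text and signed_value are illustrative names, not from the original answer):
select i.formula,
s.value as part,
replace(replace(replace(s.value, 'P(', ''), 'M(', '-'), ')', '') as signed_text,
convert(int, replace(replace(replace(s.value, 'P(', ''), 'M(', '-'), ')', '')) as signed_value
from input i
cross apply string_split(i.formula, '+') s;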

How to load a csv into a table in Q?

Very new to Q, and I am having some issues loading my data into a table following the examples in the documentation.
I am running the following code:
table1: get `:pathname.csv
While it doesn't throw an error, when I run the following command nothing comes up:
select * from table1
Or when selecting a specific column:
select col1 from table1
If anyone could guide me in the right direction, that would be great!
Edit: This seems to work and retain all my columns:
table1: (9#"S";enlist csv) 0: `:data.CSV
You're going to need to use 0: https://code.kx.com/q/ref/filenumbers/#load-csv
The exact usage will depend on your csv, as you need to define the datatypes to load each column as.
As an example, here I have a CSV with a long, char & float column:
(kdb) chronos@localhost ~/Downloads $ more example.csv
abc,def,ghi
1,a,3.4
2,b,7.5
3,c,88
(kdb) chronos@localhost ~/Downloads $ q
KDB+ 3.6 2018.10.23 Copyright (C) 1993-2018 Kx Systems
l64/ 4()core 3894MB chronos localhost 127.0.0.1 EXPIRE 2019.06.15 jonathon.mcmurray@aquaq.co.uk KOD #5000078
q)("JCF";enlist",")0:`:example.csv
abc def ghi
-----------
1   a   3.4
2   b   7.5
3   c   88
q)meta ("JCF";enlist",")0:`:example.csv
c  | t f a
---| -----
abc| j
def| c
ghi| f
q)
I use the chars "JCF" to define the datatypes long, character & float respectively.
I enlist the delimiter (",") to indicate that the first row of the CSV contains the headers for the columns. (Otherwise, the column names can be supplied in your code and the table constructed from the parsed columns.)
On a side note, in q-sql the * is not necessary as it is in standard SQL; you can simply do select from table1 to query all columns.

LTRIM and RTRIM Truncating Floating Point Number

I am experiencing what I would describe as entirely unexpected behaviour when I pass a float value through either LTRIM or RTRIM:
CREATE TABLE MyTable
(MyCol float null)
INSERT MyTable
values (11.7333335876465)
SELECT MyCol,
RTRIM(LTRIM(MyCol)) lr,
LTRIM(MyCol) l,
RTRIM(MyCol) r
FROM MyTable
Which gives the following results:
MyCol            | lr      | l       | r
--------------------------------------------
11.7333335876465 | 11.7333 | 11.7333 | 11.7333
I have observed the same behaviour on SQL Server 2014 and 2016.
Now, my understanding is that LTRIM and RTRIM should just strip off white space from a value - not cast it/truncate it.
Does anyone have an idea what is going on here?
Just to explain the background to this: I am generating SQL queries using the properties of a set of C# POCOs (the results will be used to generate an MD5 hash that will then be compared to an equivalent value from an Oracle table), and for convenience I was wrapping every column with LTRIM/RTRIM.
Perhaps you can use format() instead
Declare @F float = 11.7333335876465
Select format(@F,'#.##############')
Returns
11.7333335876465
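The truncation itself comes from the implicit conversion LTRIM and RTRIM perform: they convert the float to a character type, and the default float-to-varchar conversion keeps at most six significant digits. A small sketch reproducing the effect explicitly (CONVERT style 3 is assumed available, i.e. SQL Server 2016 or later):
-- the implicit conversion made explicit: default style keeps only 6 digits, returns 11.7333
Select CAST(11.7333335876465e0 AS varchar(30))
-- style 3 keeps full float precision
Select CONVERT(varchar(30), 11.7333335876465e0, 3)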

Collation for URL

Warning: I know very little about database collations so apologies in advance if any of this is obvious...
We've got a database column that contains urls. We'd like to place a unique constraint/index on this column.
It's come to my attention that under the default DB collation Latin1_General_CI_AS, dupes exist in this column because (for instance) the URLs http://1.2.3.4:5678/someResource and http://1.2.3.4:5678/SomeResource are considered equal. Frequently this is not the case... the kind of server these URLs point at is case sensitive.
What would be the most appropriate collation for such a column? Obviously case sensitivity is a must, but Latin1_General? Are URLs Latin1_General? I'm not bothered about a lexicographical ordering, but equality for unique indexes/grouping is important.
You can alter table to set CS (Case Sensitive) collation for this column:
ALTER TABLE dbo.MyTable
ALTER COLUMN URLColumn varchar(max) COLLATE Latin1_General_CS_AS
Also you can specify collation in the SQL statement:
SELECT * FROM dbo.MyTable
WHERE UrlColumn like '%AbC%' COLLATE Latin1_General_CS_AS
Here is a short article for reference.
The letters CI in the collation indicate case insensitivity.
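If the end goal is the unique constraint from the question, here is a rough sketch building on the ALTER above (assuming the URLs fit in a bounded varchar, since varchar(max) cannot be an index key; the length 850 and the index name are arbitrary choices kept under the index key size limit):
ALTER TABLE dbo.MyTable
ALTER COLUMN URLColumn varchar(850) COLLATE Latin1_General_CS_AS; -- restate NULL/NOT NULL as appropriate
CREATE UNIQUE INDEX UX_MyTable_URLColumn ON dbo.MyTable (URLColumn);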
For a URL, which is going to use a small subset of Latin characters and symbols, try Latin1_General_CS_AI.
Latin1_General uses code page 1252 (1), and a URL's allowed characters are included in that code page (2), so you can say that URLs are Latin1_General.
You just have to select the case-sensitive option Latin1_General_CS_AS.
RFC 3986 says:
The ABNF notation defines its terminal values to be non-negative
integers (codepoints) based on the US-ASCII coded character set
[ASCII].
Wikipedia says that the allowed characters are:
Unreserved
May be encoded but it is not necessary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
a b c d e f g h i j k l m n o p q r s t u v w x y z
0 1 2 3 4 5 6 7 8 9 - _ . ~
Reserved
Have to be encoded sometimes
! * ' ( ) ; : @ & = + $ , / ? % # [ ]
It seems there are no conflicts between these characters in comparison operations. You can also use the HASHBYTES function to make this comparison.
But this kind of comparison is not the major problem. The major problem is that http://domain:80 and http://domain may be the same resource, and a URL with encoded characters may look different from its decoded equivalent.
In my opinion, RDBMSs will eventually incorporate these kinds of structures as new data types: URL, phone number, email address, MAC address, password, latitude, longitude, and so on. I think that collation can help, but it will not solve this issue.

"if, then, else" in SQLite

Without using custom functions, is it possible in SQLite to do the following? I have two tables, which are linked via common id numbers. In the second table there are two variables. What I would like to do is return a list of results consisting of the row id, plus NULL if all instances of those two variables (and there may be more than two) are NULL, 1 if they are all 0, and 2 if one or more is 1.
What I have right now is as follows:
SELECT
a.aid,
(SELECT count(*) from W3S19 b WHERE a.aid=b.aid) as num,
(SELECT count(*) FROM W3S19 c WHERE a.aid=c.aid AND H110 IS NULL AND H112 IS NULL) as num_null,
(SELECT count(*) FROM W3S19 d WHERE a.aid=d.aid AND (H110=1 or H112=1)) AS num_yes
FROM W3 a
So what this requires is to step through each result as follows (rough Python pseudocode):
if row['num_yes'] > 0:
    out[aid] = 2
elif row['num_null'] == row['num']:
    out[aid] = 'NULL'
else:
    out[aid] = 1
Is there an easier way? Thanks!
Use CASE...WHEN, e.g.
CASE x WHEN w1 THEN r1 WHEN w2 THEN r2 ELSE r3 END
Read more in the SQLite syntax manual (see the section "The CASE expression").
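As a sketch of how this could be applied to the query from the question (using the searched CASE form; table and column names are taken from the question, and the alias result is illustrative):
SELECT a.aid,
CASE
WHEN (SELECT count(*) FROM W3S19 d WHERE a.aid=d.aid AND (H110=1 OR H112=1)) > 0 THEN 2
WHEN (SELECT count(*) FROM W3S19 c WHERE a.aid=c.aid AND H110 IS NULL AND H112 IS NULL)
= (SELECT count(*) FROM W3S19 b WHERE a.aid=b.aid) THEN NULL
ELSE 1
END AS result
FROM W3 a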
There's another way, for numeric values, which might be easier in certain specific cases.
It's based on the fact that boolean values are 1 or 0, and a comparison gives a boolean result
(this will only work for an "or"-style combination, depending on the usage):
SELECT (w1=TRUE)*r1 + (w2=TRUE)*r2 + ...
Of course, @evan's answer is the general-purpose, correct answer.
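A tiny illustration of the idea with made-up values (in SQLite a comparison evaluates to 1 or 0):
-- returns 10: the first comparison is true (1), the second is false (0)
SELECT (2 > 1) * 10 + (3 > 5) * 20;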
