Select geometry from GeoPackage in DB Browser for SQLite

I'm using DB Browser for SQLite with the SpatiaLite module loaded, and I'm trying to read a GeoPackage I previously exported from QGIS 3.12.
I performed both of the selections below and got the 'geometry' with the hex() function, which is then supposedly WKT text (I'm not sure), but when I exported it as CSV to read in QGIS again, it didn't work. I read some examples from the SpatiaLite docs about selecting it as AsText or AsBinary or another type readable in QGIS, but those only return an empty field.
Functions here: http://www.gaia-gis.it/gaia-sins/spatialite-sql-4.3.0.html#p16gpkg
First case:
SELECT cd_geocodi, nm_bairro, renmeddom, SRID(geom), hex(geom)
FROM setorcengeom_rj20171016
WHERE renmeddom > 5000
GROUP BY cd_geocodi
ORDER BY nm_bairro DESC
LIMIT 10;
Second case:
SELECT cd_geocodi, nm_bairro, renmeddom, SRID(geom) AS epsg,
       AsText(CastAutomagic(geom)) AS geometry
FROM setorcengeom_rj20171016
WHERE renmeddom > 5000
GROUP BY cd_geocodi
ORDER BY nm_bairro DESC
LIMIT 30;

I believe I got the answer! I successfully exported it as CSV and loaded it in QGIS with no problems.
> SELECT cd_geocodi, nm_bairro, renmeddom, SRID(geom) AS epsg,
>        AsWKT(CastAutomagic(geom)) AS geometry
> FROM setorcengeom_rj20171016
> GROUP BY cd_geocodi
> ORDER BY nm_bairro DESC
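As a quick sanity check before exporting, a query like the one below (a sketch against the same table) should return a geometry type and SRID rather than empty fields if CastAutomagic is decoding the GeoPackage blobs correctly:

SELECT GeometryType(CastAutomagic(geom)) AS gtype,
       SRID(CastAutomagic(geom)) AS srid
FROM setorcengeom_rj20171016
LIMIT 5;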

Query Snowflake Named Internal Stage by Column NAME and not POSITION

My company is attempting to use Snowflake Named Internal Stages as a data lake to store vendor extracts.
There is a vendor that provides an extract that is 1000+ columns in a pipe delimited .dat file. This is a canned report that they extract. The column names WILL always remain the same. However, the column locations can change over time without warning.
Based on my research, a user can only query a file in a named internal stage using the following syntax:
-- problematic because the order of the columns can change
select t.$1, t.$2 from @mystage1 (file_format => 'myformat', pattern=>'.*data.*[.]dat.gz') t;
Is there any way to use the column names instead?
E.g.,
Select t.first_name from @mystage1 (file_format => 'myformat', pattern=>'.*data.*[.]csv.gz') t;
I appreciate everyone's help and I do realize that this is an unusual requirement.
You could read these files with a UDF. Parse the CSV inside the UDF with code aware of the headers. Then output either multiple columns or one variant.
For example, let's create a .CSV inside Snowflake we can play with later:
create or replace temporary stage my_int_stage
file_format = (type=csv compression=none);
copy into '@my_int_stage/fx3.csv'
from (
select *
from snowflake_sample_data.tpcds_sf100tcl.catalog_returns
limit 200000
)
header=true
single=true
overwrite=true
max_file_size=40772160
;
list @my_int_stage
-- 34MB uncompressed CSV, because why not
;
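As a quick check, the staged file can also be queried positionally, which is exactly the limitation the question is about (a sketch; the stage's attached CSV file format applies by default):

select t.$1, t.$2
from @my_int_stage/fx3.csv t
limit 10;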
Then this is a Python UDF that can read that CSV and parse it into an Object, while being aware of the headers:
create or replace function uncsv_py()
returns table(x variant)
language python
imports = ('@my_int_stage/fx3.csv')
handler = 'X'
runtime_version = '3.8'
as $$
import csv
import sys

# Snowflake exposes imported files under this directory at runtime
IMPORT_DIRECTORY_NAME = "snowflake_import_directory"
import_dir = sys._xoptions[IMPORT_DIRECTORY_NAME]

class X:
    def process(self):
        # DictReader keys each row by the CSV header, so column order doesn't matter
        with open(import_dir + 'fx3.csv', newline='') as csvfile:
            reader = csv.DictReader(csvfile)
            for row in reader:
                yield (row,)
$$;
And then you can select from this UDF, which outputs a table:
select *
from table(uncsv_py())
limit 10
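Since each row comes back as a single variant, you can then project named fields out of it regardless of their position in the file; a sketch (the CR_* keys come from the catalog_returns headers written into the CSV above):

select x:CR_ORDER_NUMBER::number AS cr_order_number,
       x:CR_RETURN_AMOUNT::number AS cr_return_amount
from table(uncsv_py())
limit 10;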
A limitation of what I showed here is that the Python UDF needs an explicit name of a file (for now), as it doesn't take a whole folder. Java UDFs do - it will just take longer to write an equivalent UDF.
https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-tabular-functions.html
https://docs.snowflake.com/en/user-guide/unstructured-data-java.html

BigQuery or SQL Server SPLIT query

I have searched around and cannot find much on this topic. I have a table that gets logging information. As a result, the column I am interested in contains multiple values that I need to search against. The column is formatted in a PHP URL style, i.e.:
/test/test.aspx?DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32
This makes all searches end up with really long regexes to get data. Then join statements to combine data.
Is there a way in BigQuery, or SQL Server that I can pull the information from that column and put it into new columns?
Example:
The information I would like extracted begins after the ?, and each value ends at an &. The string can sometimes be longer and contain additional headers.
Thanks,
Below is for BigQuery Standard SQL and addresses this aspect of your question:
Is there a way in BigQuery, ... that I can pull the information from that column and put it into new columns?
#standardSQL
CREATE TEMP FUNCTION parseColumn(kv STRING, column_name STRING) AS (
  IF(SPLIT(kv, '=')[OFFSET(0)] = column_name, SPLIT(kv, '=')[OFFSET(1)], NULL)
);
WITH `project.dataset.table` AS (
  SELECT '/test/test.aspx?extra=abc&DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32' AS url UNION ALL
  SELECT '/test/test.aspx?DS_Vendor=55192&DS_ProdVer=4.30.100.0&more=123&DS_ProdLang=DE&DS_Product=MTE&DS_OfficeBits=64'
)
SELECT
  MIN(parseColumn(kv, 'DS_Vendor')) AS DS_Vendor,
  MIN(parseColumn(kv, 'DS_ProdVer')) AS DS_ProdVer,
  MIN(parseColumn(kv, 'DS_ProdLang')) AS DS_ProdLang,
  MIN(parseColumn(kv, 'DS_Product')) AS DS_Product,
  MIN(parseColumn(kv, 'DS_OfficeBits')) AS DS_OfficeBits
FROM `project.dataset.table`,
UNNEST(REGEXP_EXTRACT_ALL(url, r'[?&]([^?&]+)')) AS kv
GROUP BY url
with the result as below:
Row  DS_Vendor  DS_ProdVer  DS_ProdLang  DS_Product  DS_OfficeBits
1    55039      7.90.100.0  EN           MTT         32
2    55192      4.30.100.0  DE           MTE         64
Below is also addressed
The string can sometimes be longer, and contains additional headers.
One example using BigQuery (with standard SQL):
SELECT REGEXP_EXTRACT_ALL(url, r'[?&]([^?&]+)')
FROM (
SELECT '/test/test.aspx?DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32' AS url
)
This returns the parts of the URL as an ARRAY<STRING>. To go one step further, you can get back an ARRAY<STRUCT<key STRING, value STRING>> with a query of this form:
SELECT
ARRAY(
SELECT AS STRUCT
SPLIT(part, '=')[OFFSET(0)] AS key,
SPLIT(part, '=')[OFFSET(1)] AS value
FROM UNNEST(REGEXP_EXTRACT_ALL(url, r'[?&]([^?&]+)')) AS part
) AS keys_and_values
FROM (
SELECT '/test/test.aspx?DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32' AS url
)
...or with the keys and values as top-level columns:
SELECT
SPLIT(part, '=')[OFFSET(0)] AS key,
SPLIT(part, '=')[OFFSET(1)] AS value
FROM (
SELECT '/test/test.aspx?DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32' AS url
)
CROSS JOIN UNNEST(REGEXP_EXTRACT_ALL(url, r'[?&]([^?&]+)')) AS part
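For the SQL Server side of the question, a rough counterpart is possible with STRING_SPLIT (a sketch, assuming SQL Server 2016+; this is not from the answers above):

SELECT
  LEFT(part.value, CHARINDEX('=', part.value) - 1) AS [key],
  STUFF(part.value, 1, CHARINDEX('=', part.value), '') AS [value]
FROM (
  SELECT '/test/test.aspx?DS_Vendor=55039&DS_ProdVer=7.90.100.0&DS_ProdLang=EN&DS_Product=MTT&DS_OfficeBits=32' AS url
) t
CROSS APPLY STRING_SPLIT(STUFF(t.url, 1, CHARINDEX('?', t.url), ''), '&') AS part;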

Creating XML Schema for Bulk Load to SQL Server - Child Element Describes Parent

I have an XML document that I'm working to build a schema for in order to bulk load these documents into a SQL Server table. The XML I'm focusing on looks like this:
<Coverage>
<CoverageCd>BI</CoverageCd>
<CoverageDesc>BI</CoverageDesc>
<Limit>
<FormatCurrencyAmt>
<Amt>30000.00</Amt>
</FormatCurrencyAmt>
<LimitAppliesToCd>PerPerson</LimitAppliesToCd>
</Limit>
<Limit>
<FormatCurrencyAmt>
<Amt>85000.00</Amt>
</FormatCurrencyAmt>
<LimitAppliesToCd>PerAcc</LimitAppliesToCd>
</Limit>
</Coverage>
<Coverage>
<CoverageCd>PD</CoverageCd>
<CoverageDesc>PD</CoverageDesc>
<Limit>
<FormatCurrencyAmt>
<Amt>50000.00</Amt>
</FormatCurrencyAmt>
<LimitAppliesToCd>Coverage</LimitAppliesToCd>
</Limit>
</Coverage>
Inside the Limit element, there's a child LimitAppliesToCd that I need to use to determine where the Amt element's value actually gets stored inside my table. Is this possible to do using the standard XML Bulk Load feature of SQL Server? Normally in XML I'd expect that the element would have an attribute containing the "PerPerson" or "PerAcc" information, but this standard we're using does not call for that.
If anyone has worked with the ACORD standard before, you might know what I'm working with here. Any help is greatly appreciated.
Don't know exactly what you are talking about, but this is a solution to get the information out of your XML.
Assumption: Your XML is already bulk-loaded into a declared variable @xml of type XML.
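If it isn't loaded yet, one way to get a file into that variable is OPENROWSET with SINGLE_BLOB (a sketch; the file path is hypothetical, and the query below assumes the Coverage fragments are wrapped in a <root> element to match the '/root/Coverage' path):

DECLARE @xml XML;
SELECT @xml = CAST(BulkColumn AS XML)
FROM OPENROWSET(BULK 'C:\data\coverage.xml', SINGLE_BLOB) AS src;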
A CTE will pull the information out of your XML. The final query will then use PIVOT to put your data into the right column.
With a fitting table's structure the actual insert should be simple...
WITH DerivedTable AS
(
SELECT cov.value('CoverageCd[1]','varchar(max)') AS CoverageCd
,cov.value('CoverageDesc[1]','varchar(max)') AS CoverageDesc
,lim.value('(FormatCurrencyAmt/Amt)[1]','decimal(14,4)') AS Amt
,lim.value('LimitAppliesToCd[1]','varchar(max)') AS LimitAppliesToCd
FROM @xml.nodes('/root/Coverage') AS A(cov)
CROSS APPLY cov.nodes('Limit') AS B(lim)
)
SELECT p.*
FROM
(SELECT * FROM DerivedTable) AS tbl
PIVOT
(
MIN(Amt) FOR LimitAppliesToCd IN (PerPerson, PerAcc, Coverage)
) AS p
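With a fitting target table, the actual insert could then look like this (a sketch; the table dbo.CoverageLimits and its columns are hypothetical):

WITH DerivedTable AS
(
SELECT cov.value('CoverageCd[1]','varchar(max)') AS CoverageCd
,cov.value('CoverageDesc[1]','varchar(max)') AS CoverageDesc
,lim.value('(FormatCurrencyAmt/Amt)[1]','decimal(14,4)') AS Amt
,lim.value('LimitAppliesToCd[1]','varchar(max)') AS LimitAppliesToCd
FROM @xml.nodes('/root/Coverage') AS A(cov)
CROSS APPLY cov.nodes('Limit') AS B(lim)
)
INSERT INTO dbo.CoverageLimits (CoverageCd, CoverageDesc, PerPerson, PerAcc, Coverage)
SELECT p.CoverageCd, p.CoverageDesc, p.PerPerson, p.PerAcc, p.Coverage
FROM
(SELECT * FROM DerivedTable) AS tbl
PIVOT
(
MIN(Amt) FOR LimitAppliesToCd IN (PerPerson, PerAcc, Coverage)
) AS p;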

Parse json arrays using HIVE

I have many json arrays stored in a table (jt) that looks like this:
[{"ts":1403781896,"id":14,"log":"show"},{"ts":1403781896,"id":14,"log":"start"}]
[{"ts":1403781911,"id":14,"log":"press"},{"ts":1403781911,"id":14,"log":"press"}]
Each array is a record.
I would like to parse this table in order to get a new table (logs) with 3 fields: ts, id, log.
I tried to use the get_json_object method, but it seems that method is not compatible with json arrays because I only get null values.
This is the code I have tested:
CREATE TABLE logs AS
SELECT get_json_object(jt.value, '$.ts') AS ts,
get_json_object(jt.value, '$.id') AS id,
get_json_object(jt.value, '$.log') AS log
FROM jt;
I tried to use other functions but they seem really complicated.
Thank you! :)
Update!
I solved my issue by performing a regexp:
CREATE TABLE jt_reg AS
select regexp_replace(regexp_replace(value,'\\}\\,\\{','\\}\\\n\\{'),'\\[|\\]','') as valuereg from jt;
CREATE TABLE logs AS
SELECT get_json_object(jt_reg.valuereg, '$.ts') AS ts,
get_json_object(jt_reg.valuereg, '$.id') AS id,
get_json_object(jt_reg.valuereg, '$.log') AS log
FROM jt_reg;
I just ran into this problem, with the JSON array stored as a string in the hive table.
The solution is a bit hacky and ugly, but it works and doesn't require serdes or external UDFs
SELECT
  get_json_object(single_json_table.single_json, '$.ts') AS ts,
  get_json_object(single_json_table.single_json, '$.id') AS id,
  get_json_object(single_json_table.single_json, '$.log') AS log
FROM (
  SELECT explode(
    split(regexp_replace(substr(json_array_col, 2, length(json_array_col)-2),
          '"}","', '"}",,,,"'), ',,,,')
  ) AS single_json
  FROM src_table
) single_json_table;
I broke the lines up so that it would be a little easier to read.
I'm using substr() to strip the first and last characters, removing [ and ]. I'm then using regexp_replace to match the separator between records in the JSON array and change it to something unique that can then be used easily with split() to turn the string into a Hive array of JSON objects, which can then be used with explode() as described in the previous solution.
Note, the separator regex used here ( "}"," ) wouldn't work with the original data set; there the regex would have to be ( "},\{" ) and the replacement would then need to be "},,,,{", e.g.:
split(regexp_replace(substr(json_array_col, 2, length(json_array_col)-2),
'"},\\{"', '"},,,,{"'), ',,,,')
Use the explode() function:
hive (default)> CREATE TABLE logs AS
> SELECT get_json_object(single_json_table.single_json, '$.ts') AS ts,
> get_json_object(single_json_table.single_json, '$.id') AS id,
> get_json_object(single_json_table.single_json, '$.log') AS log
> FROM
> (SELECT explode(json_array_col) as single_json FROM jt) single_json_table ;
Automatically selecting local only mode for query
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
hive (default)> select * from logs;
OK
ts id log
1403781896 14 show
1403781896 14 start
1403781911 14 press
1403781911 14 press
Time taken: 0.118 seconds, Fetched: 4 row(s)
hive (default)>
where json_array_col is the column in jt which holds your array of JSONs.
hive (default)> select json_array_col from jt;
json_array_col
["{"ts":1403781896,"id":14,"log":"show"}","{"ts":1403781896,"id":14,"log":"start"}"]
["{"ts":1403781911,"id":14,"log":"press"}","{"ts":1403781911,"id":14,"log":"press"}"]
Because get_json_object doesn't support a JSON array string, you can concat it into a JSON object, like this:
SELECT
get_json_object(concat(concat('{"root":', jt.value), '}'), '$.root')
FROM jt;
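Individual elements of the wrapped array can then be addressed by index (a sketch):

SELECT
  get_json_object(concat('{"root":', jt.value, '}'), '$.root[0].ts') AS first_ts,
  get_json_object(concat('{"root":', jt.value, '}'), '$.root[1].log') AS second_log
FROM jt;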

Source data type "200" not found error when exporting query results to excel Microsoft SQL Server 2012

I am very new to Microsoft SQL Server and am using 2012 Management Studio. I get the error above when I try to export query results to an excel file using the wizard. I have seen solutions posted elsewhere for this error but do not know enough to figure out how to implement the solutions recommended. Can somebody please walk me through one of these solutions step by step?
I believe my problem is that the SQL Server Import and Export Wizard does not recognise varchar and nvarchar, which I believe is the data type of the columns I am receiving errors for.
Source Type 200 in SQL Server Import and Export Wizard?
http://connect.microsoft.com/SQLServer/feedback/details/775897/sql-server-import-and-export-wizard-does-not-recognise-varchar-and-nvarchar#
Query:
SELECT licenseEntitlement.entID, licenseEntitlement.entStartDate, entEndDate,
       quote.quoteId, quote.accountId, quote.clientId, quote.clientName,
       quote.contactName, quote.contactEmail, quote.extReference,
       quote.purchaseOrderNumber, quote.linkedTicket
FROM licenseEntitlement
INNER JOIN quote
    ON quote.quoteId = SUBSTRING(licenseEntitlement.entComments, 12,
       PATINDEX('% Created%', licenseEntitlement.entComments) - 12)
INNER JOIN sophos521.dbo.computersanddeletedcomputers
    ON computersanddeletedcomputers.name = entid
   AND IsNumeric(computersanddeletedcomputers.name) = 1
WHERE (licenseEntitlement.entType = 'AVS')
  AND (licenseEntitlement.entComments LIKE 'OV Order + %')
  AND entenddate < '4/1/2014'
ORDER BY licenseEntitlement.entEndDate
Error:
TITLE: SQL Server Import and Export Wizard
------------------------------
Column information for the source and the destination data could not be retrieved, or the data types of source columns were not mapped correctly to those available on the destination provider.
[Query] -> `Query`:
- Column "accountId": Source data type "200" was not found in the data type mapping file.
- Column "clientId": Source data type "200" was not found in the data type mapping file.
- Column "clientName": Source data type "200" was not found in the data type mapping file.
- Column "contactName": Source data type "200" was not found in the data type mapping file.
- Column "contactEmail": Source data type "200" was not found in the data type mapping file.
- Column "extReference": Source data type "200" was not found in the data type mapping file.
- Column "purchaseOrderNumber": Source data type "200" was not found in the data type mapping file.
- Column "linkedTicket": Source data type "200" was not found in the data type mapping file.
If any more details are needed please let me know
So, implementing the suggestion at the StackOverflow link you gave, of turning the query into a View, here's an example of what that could look like (with some code formatting ;) --
CREATE VIEW [dbo].[test__View_1]
AS
SELECT LIC.entID, LIC.entStartDate, entEndDate,
quote.quoteId, quote.accountId, quote.clientId, quote.clientName,
quote.contactName, quote.contactEmail, quote.extReference,
quote.purchaseOrderNumber, quote.linkedTicket
FROM [dbo].licenseEntitlement LIC WITH(NOLOCK)
INNER JOIN [dbo].quote WITH(NOLOCK)
ON quote.quoteId = SUBSTRING(LIC.entComments, 12,
PATINDEX('% Created%', LIC.entComments) - 12)
INNER JOIN sophos521.dbo.computersanddeletedcomputers COMPS WITH(NOLOCK)
ON COMPS.name = entid and IsNumeric(COMPS.name) = 1
WHERE (LIC.entType = 'AVS')
AND (LIC.entComments LIKE 'OV Order + %')
and (entenddate < '4/1/2014')
-- (no ORDER BY here: SQL Server rejects ORDER BY in a view unless TOP is used,
--  so apply the ordering when you SELECT from the view)
GO
Then, you would export from test__View_1 (or whatever real name you choose for it), as if test__View_1 was the table name.
FYI, after the first time you've executed the above -- after you've "created" the view -- then from then on, the view's first line (during modifications) changes, from CREATE VIEW, to ALTER VIEW.
((And, aside from the bug question... in your WHERE clause, did you intend entComments LIKE 'OV Order + %', or was that really intended to be entComments LIKE 'OV Order%'? I've made that change, in the alternative example code, below.))
Note: if you're going to be exporting repeatedly (or re-using) the output from one run, and especially if your query is slow or hogs the machine... then instead of a VIEW, you might prefer a SELECT INTO, to create a table once, which can be quickly re-used. (I would also choose SELECT INTO rather than CREATE VIEW, when developing a one-time-only query for export.)
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'zz_LIC_ENT_DETAIL')
DROP TABLE [dbo].zz_LIC_ENT_DETAIL
SELECT LIC.entID, LIC.entStartDate, LIC.entEndDate,
quote.quoteId, quote.accountId, quote.clientId, quote.clientName,
quote.contactName, quote.contactEmail, quote.extReference,
quote.purchaseOrderNumber, quote.linkedTicket
INTO [dbo].zz_LIC_ENT_DETAIL
FROM [dbo].licenseEntitlement LIC WITH(NOLOCK)
INNER JOIN [dbo].quote WITH(NOLOCK)
ON quote.quoteId = SUBSTRING(LIC.entComments, 12,
PATINDEX('% Created%', LIC.entComments) - 12)
INNER JOIN sophos521.dbo.computersanddeletedcomputers COMPS WITH(NOLOCK)
ON COMPS.name = LIC.entid and IsNumeric(COMPS.name) = 1
WHERE (LIC.entType = 'AVS')
AND (LIC.entComments LIKE 'OV Order%')
and (LIC.entenddate < '4/1/2014')
ORDER BY LIC.entEndDate
Then, you would of course export from table zz_LIC_ENT_DETAIL (or whatever table name you chose).
Hope that helps...
It might be easier to right-click the query results window and choose Save Results As (CSV).
To include the column names in the first row, you'd also need to modify your query in this way (note the cast for int or datetime columns):
select 'col1', 'col2', 'col3'
union all
select cast(id as varchar(10)), name, cast(someinfo as varchar(28))
from Question1355876
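Applied to a few of the columns from the question's query, that could look like this (a sketch; CONVERT style 121 is just one way to format the dates):

select 'entID', 'entStartDate', 'entEndDate'
union all
select cast(entID as varchar(20)),
       convert(varchar(23), entStartDate, 121),
       convert(varchar(23), entEndDate, 121)
from licenseEntitlement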
