I have Redash set up and I am able to connect to the Google Sheets data source, but when I attempt a select query
select * from 1YaipA_nhUq5zl37EZ9tFa32qc4kgF1cMlo41ch1lcF4
I am getting
Error running query: Spreadsheet (select * from 1YaipA_nhUq5zl37EZ9tFa32qc4kgF1cMlo41ch1lcF4) not found. Make sure you used correct id.
I have shared the sheet with the service account ID and it's a simple sheet I am using for testing. https://docs.google.com/spreadsheets/d/1YaipA_nhUq5zl37EZ9tFa32qc4kgF1cMlo41ch1lcF4/edit?usp=sharing
I know I have had this working in the past, must be missing something simple. Thanks in advance.
After checking Redash's implementation of the Google Sheets query runner: to access the sheet, the query should be a one-liner in the format
SpreadsheetID|SheetNumber
In your case, that would be:
1YaipA_nhUq5zl37EZ9tFa32qc4kgF1cMlo41ch1lcF4
to access the first worksheet by default. Alternatively, you may want to be more specific:
1YaipA_nhUq5zl37EZ9tFa32qc4kgF1cMlo41ch1lcF4|1
to access the second worksheet.
Keep in mind that Redash doesn't execute your query on the fly; rather, according to the official documentation, it loads the whole worksheet before you can do any further processing.
After the data loads, you should see something like this:
It simply means that Redash has loaded your data (in this case, 1,962 rows) and you can start building visualizations. Cheers!
I want to optimize my data flow by reading just the data I really need.
I created a dataset that maps a view on my database. This dataset is used by different data flows, so I need a generic projection.
Now I am creating a new data flow and I want to read just a subset of the dataset.
Here is how I created the dataset:
And this is the generic projection:
Here is how I created the data flow. These are the source settings:
But now I want just a subset of my dataset:
It works, but I think I am doing it wrong:
I want to read data from my dataset (as you can see from the Source settings tab), but when I modify the projection I end up reading from the underlying table (as you can see from the source options). It seems inconsistent. What is the correct way to manage this kind of customization?
Thank you
EDIT
The proposed solution does not solve my problem. If I go into Monitor and analyze the executions, this is what I see...
Before applying the proposed solution, using the approach I wrote above, I got this:
As you can see, I read just 8 columns from the database.
With the proposed solution, I get this:
And only then:
Just to be clear, the purpose of my question is:
How can I read only the data I really need, instead of reading all the data and filtering it afterwards?
I found a way (explained in my question), but there is an inconsistency in the data flow configuration (I set a dataset as the input, but in the source options I write a query that reads from the database).
First, import the data as a source.
You can use the Select transformation in the Data Flow activity to select CustomerID from the imported dataset.
Here you can remove unwanted columns.
Refer - https://learn.microsoft.com/en-us/azure/data-factory/data-flow-select
I have a Google Sheet document that I only have read access to.
It has a set of workers in it. One of the fields is for "job location", and another is for "house location". When these fields don't match, the worker is "remote".
I'm trying to add a calculated column to a data source in Google Data Studio, but I can't find any string function that checks for equivalence, and just going J=K doesn't work.
The CASE operator isn't able to compare columns either.
Is there a way to make a formula determine if two fields are equivalent?
For future reference, the feature was introduced in the 07 Jan 2021 update; thus using the fields specified in the question (job location and house location), the CASE statement below does the trick:
CASE
WHEN NOT job location = house location THEN "remote"
ELSE "not remote"
END
Editable Google Data Studio Report and a GIF to elaborate:
Currently, there is no direct solution in Data Studio to do this.
However, you can take one of two approaches:
Create a new Google Sheet. Use IMPORTRANGE to bring in the entire dataset from the source Sheet, then add the comparison column in this worksheet. Use ARRAYFORMULA to extend the formula all the way down (e.g. =ARRAYFORMULA(D:D=E:E), which can be further polished). This Sheet can then work as your data source.
Create a Community Connector to fetch data from the Sheet using the Sheets Service. Add the comparison as a column in Apps Script.
I have searched Google and this site for about two hours trying to work out how to do this, with no luck finding an approach that fits or that I understand. As the title says, I need to export table data to an XML file. I have an Azure SQL database with table data.
Table name: District
Table Columns: Id, name, organizationType, address, etc.
I need to take this data and create an XML file that I can save so that it can be given to others.
I have tried using:
SELECT *
FROM dbo.District
FOR XML PATH('districtEntry'), ROOT('leaID')
It gives me the data in XML format, but I don't see a way to save it.
Also, there are some functions I need to be able to perform with the data:
Program should have these options:
1) Export all data.
2) Export all rows created or updated since a specified date.
Files should be named in format ENTITY.DATE.XML, as in
DISTRICT.20150521.XML (use date in YYYYMMDD format).
This leads me to believe I need to write code other than SQL since a requirement would be to query the table for certain data elements as well.
I was wondering if I would need to download any Database Server Data Tools and write code, and if so, in what language, etc. I believe the XML file creation would need to be automated after every update of the table, or after a query.
I am very confused and in need of guidance as I now have almost given up hope. Please let me know if I need to clarify anything. Thank you.
P.S. I would have given pictures but I do not have enough reputation to supply them.
I would imagine you're looking to write a program in VB.NET or C#, using ADO.NET in either case. Here's an MSDN article with a complete sample of how to connect to and query SQL Azure:
https://msdn.microsoft.com/en-us/library/azure/ee336243.aspx
The example shows how to write the output to the console, but you could similarly use something like a StreamWriter to write it to a file.
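As a rough sketch of that approach (not the article's exact sample), a small C# console program could run the FOR XML query from the question and write the result to a file named in the required ENTITY.DATE.XML format. The connection string and the lastUpdated column used for the "since a specified date" filter are assumptions, so adjust both to your actual schema:

using System;
using System.Data.SqlClient;
using System.IO;

class ExportDistrictXml
{
    static void Main()
    {
        // Assumed connection string - replace with your Azure SQL server, database and credentials.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=yourdb;User ID=youruser;Password=yourpassword;Encrypt=True;";

        // Option 2: only rows created or updated since this date.
        // "lastUpdated" is a hypothetical column name - use whatever tracks changes in your table,
        // or drop the WHERE clause entirely for option 1 (export all data).
        DateTime since = new DateTime(2015, 5, 1);

        const string query =
            "SELECT * FROM dbo.District " +
            "WHERE lastUpdated >= @since " +
            "FOR XML PATH('districtEntry'), ROOT('leaID')";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            command.Parameters.AddWithValue("@since", since);
            connection.Open();

            // ExecuteXmlReader returns the FOR XML result as an XmlReader.
            using (var reader = command.ExecuteXmlReader())
            {
                reader.MoveToContent();
                string xml = reader.ReadOuterXml();

                // ENTITY.DATE.XML naming, e.g. DISTRICT.20150521.XML
                string fileName = "DISTRICT." + DateTime.Today.ToString("yyyyMMdd") + ".XML";
                using (var writer = new StreamWriter(fileName))
                {
                    writer.Write(xml);
                }
            }
        }
    }
}

Running this on a schedule (a scheduled task, an Azure WebJob, or whatever you already use for automation) would cover the requirement of regenerating the file after table updates.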
You could also create a sqlcmd script to do this, following the guidelines here to connect using sqlcmd:
https://msdn.microsoft.com/en-us/library/azure/ee336280.aspx
Alternatively, if this is a process that does not need to be automated or repeated frequently, you could do it using SSMS:
http://azure.microsoft.com/en-us/documentation/articles/sql-database-manage-azure-ssms/
Running your query through SSMS would produce an XML document, which could be saved using File->Save As
I've installed Squeryl Client and can easily access my iSeries DB2/400 and select and see the data within the tables. However, it seems I have to modify the URL in the alias every time I want to change from one library(database) to another. If I want to query a file(table) from library(database) "LibraryA", I use URL "jdbc:as400://www.system.com/LibraryA". If I want to query a file(table) from library(database) "LibraryB", I use URL "jdbc:as400://www.system.com/LibraryB". Even when I try to use a URL with a library list like "jdbc:as400://www.system.com/;libraries=LibraryA LibraryB", it only looks at the first library when trying to access a table in "LibraryB".
When I drag a table to the graph and select some fields, I would expect the SQL to qualify the table with the library (database) name. After all, it knows which library the table is being dragged from. The generated SQL looks like this:
SELECT
tableB.field1,tableB.field2
FROM tableB
What I would expect is for it to look something like this (iSeries sql syntax):
SELECT
tableB.field1,tableB.field2
FROM LibraryB/tableB
When I try to key over the generated SQL command, it still tries to access the table from LibraryA.
If I use the URL "jdbc:as400://www.system.com/", it will try to find a library (database) named the same as my user ID.
Since Squeryl Client can build the objects list showing the library and table I'm selecting, I should think it would be able to build a query that accesses the correct library as well.
What am I missing?
Thanks
Bob
I seem to have figured this out. I changed the URL to the following:
jdbc:as400://www.system.com/;naming=system;libraries=LibraryA LibraryB
I have created an MS SQL query in Excel.
I have added extra columns in the Excel sheet in which I want to enter manual data.
When I refresh the data, these manually entered columns become misaligned with the imported data they refer to.
Is there any way around this happening?
I have tried to link the imported data sheet to a manual data sheet via VLOOKUP, but this isn't working as there are no unique fields to link on.
Please help!
Thanks
Excel version is 2010.
MS SQL version is 2005.
There is no unique data.
This is because Excel initially looks like this:
When we enter a new order into the database, Excel looks like this:
Try this: in the External Data Range Properties, select "Insert entire rows for new data".
Not sure, but worth a try. And keep us updated on the result!
Edit: And make sure you provide a consistent sort order.
There is no relationship between the spreadsheet's external data and the columns you are entering. When refreshing, the data is typically cleared and updated, though there are other options in the external data refresh menu you could play with to see if changing what happens with new data helps.
If you want your manually entered data to link to the data in the embedded dataset, you have to establish the lookup with a VLOOKUP or some formula that finds the row's info and shows it.
Basically, you are assuming the SQL data on the spreadsheet is static, but it isn't, unless you never refresh it or disconnect it from the database.
Note that Marcel Beug has given a full solution to this problem in a more recent post in this forum: "Inserting text manually in a custom column and should be visible on refresh of the report".
He has even taken the time to record an example in a video: https://www.youtube.com/watch?v=duNYHfvP_8U&feature=youtu.be