I like the idea of SQLite, but I'm more comfortable with PostgreSQL, MySQL, even MS Access or Oracle.
I've got something written by someone else which generates SQLite databases that include a date/time field, and I want to get those into a format that Gnuplot can understand. Both Sqliteman and SQLite browser show the field as an integer, and it looks like a Unix time_t when I query it, except it's 3 digits longer, like 1444136564028.
It doesn't have to be done by piping sqlite3 into Gnuplot, and it doesn't have to use the unixepoch/%s time format. I just can't find any examples of converting SQLite time fields in a query. One example, SELECT strftime('%s','now'), works, but when I replace now with a field in a real query it doesn't work. All the examples I find use immediate/literal values, not fields from queries.
And can SQLite use a tablename.fieldname format, or does it have to be SELECT fieldname FROM tablename?
Unix timestamps are in seconds; your values are in milliseconds, a common Java quirk. Divide by 1000:
WITH MyLittleTable(d) AS (VALUES(1444136564028))
SELECT datetime(d / 1000, 'unixepoch') FROM MyLittleTable;
2015-10-06 13:02:44
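The same conversion works on a real column, and SQLite does accept the tablename.fieldname form. A minimal sketch using Python's sqlite3 module, where the table samples and its columns are made-up stand-ins for the real schema:

import sqlite3

# In-memory database with a hypothetical table holding millisecond timestamps.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (ts INTEGER, value REAL)")
con.execute("INSERT INTO samples VALUES (1444136564028, 42.0)")

# datetime() accepts a column, not just a literal, and qualified
# tablename.fieldname references work as expected.
for row in con.execute(
    "SELECT datetime(samples.ts / 1000, 'unixepoch'), samples.value "
    "FROM samples"
):
    print(row)  # ('2015-10-06 13:02:44', 42.0)

con.close()

Output in that form can be parsed by Gnuplot with set timefmt "%Y-%m-%d %H:%M:%S".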
I transferred an Oracle database to SQL Server, and all seems to have gone well. The various ID columns are large numbers, so I had to use DECIMAL, as they were too large for BIGINT.
I am now trying to read the data with pandas.read_sql over a pyodbc connection using ODBC Driver 17 for SQL Server: df = pandas.read_sql("SELECT * FROM table1", con)
The numbers are coming out as float64, and when I try to print them or use them in SQL statements they come out in scientific notation. When I try '{:.0f}'.format(df.loc[i,'Id']), it turns several numbers into the same number, such as 90300111000003078520832. It is as if precision is lost when it goes to scientific notation.
I also tried pd.options.display.float_format = '{:.0f}'.format before the read_sql but this did not help.
Clearly I must be doing something wrong as the Ids in the database are correct.
Any help is appreciated. Thanks!
pandas' read_sql method has an option named coerce_float, which defaults to True and, per the docs:
Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point, useful for SQL result sets.
However, in your case it is not useful, so simply specify coerce_float=False.
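A minimal sketch of that fix, assuming the table1 from the question and a pyodbc connection (the connection details are placeholders):

import pandas
import pyodbc

# Placeholder connection string; substitute your server and database.
con = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)

# coerce_float=False keeps DECIMAL columns as decimal.Decimal objects
# instead of converting them to lossy float64.
df = pandas.read_sql("SELECT * FROM table1", con, coerce_float=False)

print(df["Id"].iloc[0])       # full-precision decimal.Decimal
print(str(df["Id"].iloc[0]))  # safe to interpolate into later SQL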
I've had this problem too, especially when working with long IDs: read_sql works fine for the primary key, but not for other columns (like the retweeted_status_id from Twitter API calls). Setting coerce_float to False does nothing for me, so instead I cast retweeted_status_id to a character format in my SQL query.
Using PostgreSQL, I do:
df = pandas.read_sql("SELECT *, Id::text FROM table1", con)
But in SQL server it'd be something like
df = pandas.read_sql("SELECT *, CONVERT(text, Id) FROM table1", con)
or
df = pandas.read_sql("SELECT *, CAST(Id AS varchar) FROM table1", con)
Obviously there's a cost here if you're casting many rows, and a more efficient option might be to pull from SQL Server without using pandas (as a nested list or JSON or something else), which will also preserve your long integer formats; see the sketch below.
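A sketch of that last option with plain pyodbc, again with placeholder connection details:

import pyodbc

con = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)
cur = con.cursor()
cur.execute("SELECT * FROM table1")

# pyodbc hands DECIMAL columns back as decimal.Decimal, so the IDs
# keep their full precision in this nested list.
rows = [list(row) for row in cur.fetchall()]
print(rows[0])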
For SQL queries, Zeppelin 0.8.1 provides table output and several visualizations of the data out of the box, and that is very useful most of the time.
But sometimes I just want plain text output for presentation.
Say, for the query SELECT version();, the table output is annoying.
What is very interesting is that text output is already implemented, for example for EXPLAIN.
Of course, ideally for an EXPLAIN query you might also expect more visualization of nodes, costs and so on, but that is an entirely different question.
So, the main question: how can I switch the output to text form for some of my SQL queries other than EXPLAIN, in a similar way?
Additionally, if I run maintenance commands like VACUUM and ANALYZE, I can see their output in many IDEs, but in Zeppelin it is empty!
An ugly workaround is available as long as JDBCInterpreter selects the text renderer via EXPLAIN_PREDICATE:
private static final String EXPLAIN_PREDICATE = "EXPLAIN ";
...
String results = getResults(resultSet,
    !containsIgnoreCase(sqlToExecute, EXPLAIN_PREDICATE), isComplete);
Because the check is a case-insensitive substring match, hiding the keyword in a comment is enough to trigger text output:
/*'EXPLAIN '*/ select version();
In the future it would be nice to manage the output type via paragraph properties.
VACUUM and ANALYZE send messages which should be caught via Statement#getWarnings.
In a project I'm working on, I need to stream potentially large data sets from a Postgres database to the client, for analytics purposes.
The application is built in Rails (irrelevant for this question) and after a bit of research I'm currently able to stream query results by using COPY in Postgres:
COPY (SELECT row_to_json(t) from (#{query}) t) TO STDOUT;
Sources (for those interested):
https://shift.infinite.red/fast-csv-report-generation-with-postgres-in-rails-d444d9b915ab
https://github.com/brianhempel/stream_json_demo
This works, but it yields every row as a JSON object of key-value pairs, e.g.:
["{\"id\":403457,\"email\":\"email403457#example.com\",\"first_name\":\"Firstname403457\",\"last_name\":\"Lastname403457\",\"source\":\"adwords\",\"created_at\":\"2015-08-05T22:43:07.295796\",\"updated_at\":\"2017-01-19T04:48:29.464051\"}"]
In the spirit of minimising the size (in bytes) of the response, and especially since this is getting served over the web, I want to return just an array of values for every row, i.e.:
["[403457, \"email403457#example.com\", \"Firstname403457\", \"Lastname403457\", \"adwords\", \"2015-08-05T22:43:07.295796\", \"2017-01-19T04:48:29.464051\"]"]
Is there a way to achieve this within Postgres, even by nesting functions, starting from the query above?
You could create a simple SQL function that converts a row into the desired format:
CREATE FUNCTION row2json(anyelement) RETURNS json
LANGUAGE sql STABLE AS
'SELECT json_agg(z.value) FROM json_each(row_to_json($1)) z';
Then you use that to transform the output:
SELECT row2json(mytab) FROM mytab;
If performance is more important than JSON output, just cast the result to a string:
SELECT CAST(mytab AS text) FROM mytab;
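If you keep the JSON route, the function drops straight into the COPY streaming from the question. A sketch using Python's psycopg2 rather than Rails, with a placeholder connection string and the mytab table from above:

import io
import psycopg2

con = psycopg2.connect("dbname=mydb")  # placeholder connection string
cur = con.cursor()

# Each streamed line is now a compact JSON array instead of an object.
buf = io.StringIO()
cur.copy_expert("COPY (SELECT row2json(t) FROM mytab t) TO STDOUT", buf)

for line in buf.getvalue().splitlines():
    print(line)  # e.g. [403457, "email403457#example.com", ...]

con.close()

In a real streaming setup you would hand copy_expert a file-like object that writes through to the HTTP response instead of buffering everything in memory.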
All,
I am currently using some systems that have an Informix DB on older IBM AIX-based systems. I have found myself needing to use the command-line dbaccess tool to make some quick queries. Informix has a really annoying habit of returning output in this format:
employee -1
record_desc Update
field_id 2
value
opr_activity_date 20150831
opr_activity_time 1
employee -1
record_desc Update
field_id 2
value
opr_activity_date 20150831
opr_activity_time 1
employee -1
record_desc Update
field_id 2
value
opr_activity_date 20150831
opr_activity_time 1
MySQL, MSSQL, etc. all output something more readable, in table format:
city state zipcode
Sunnyvale CA 94086
San Francisco CA 94117
Palo Alto CA 94303
Redwood City CA 94026
Los Altos CA 94022
Mountain View CA 94063
Palo Alto CA 94304
Redwood City CA 94063
I noticed that Informix will/can output in a column/table format, but I have not figured out any rhyme or reason as to how it decides between the flat and the table format.
Any idea how I can force Informix to always display column/table output on the command line?
Obviously, this is not an issue when I am near my computer and can use my GUI tool to query the DB...
Unfortunately, there's no way to control this behaviour in DB-Access.
If the width of the selected columns (plus a little white space) exceeds the width of the terminal, DB-Access switches to that block format, because it doesn't support sideways scrolling. That's the rhyme and reason.
You can try messing around with your terminal settings so that DB-Access is aware on start-up that the terminal is wider than 80 characters, but I've always found there's more luck than science to that, and you'll still trigger the behaviour on some queries and not others.
When I need to do what you're describing (ad hoc, simple queries for troubleshooting and the like), I tend to work within VIM rather than DB-Access, and use a macro to run the query and format the output. (This uses DBI::Shell behind the scenes.) I've also got a program that accepts either a table name or an SQL statement and outputs a tab-delimited, CSV or old-school ASCII-formatted table of the results; it is also Perl-based. I could publish either of these if there's interest in them.
I think Jonathan Leffler's SQLCMD program can also be used in place of DB-Access to generate arbitrarily wide output.
OK...
While I found the answer RET provided to be correct, and it pretty much sums up what I have been able to find on the net, I also found a workaround that gets you what you want, albeit in a kludgy way! Thanks, Informix! :(
Open two terminal windows to your DB system. In the first, launch dbaccess, authenticate, and connect to your database.
Next perform the following:
unload to /home/(user)/out ...the query...
Example:
unload to /home/jewettg/out select * from books_checked_in;
It will write the query results to the file and report the row count of the returned result.
On the second terminal, and here is the cool thing, run the following command:
column -t -s '|' /home/(user)/out
This will grab the content of the out file, convert the pipe-delimited content into aligned, space-separated columns, and print it to the screen.
Like I said, kludgy, but it works!
You can do this by setting the DBACCESS_COLUMNS environment variable. It is supported from version 12.10.xC9.
Example:
export DBACCESS_COLUMNS=1000
I'm tasked with exporting data from an old application that is using SQL Anywhere, apparently version 5, maybe 5.6. I have never worked with this database before, so I'm not sure where to start. Does anybody have a hint?
I'd like to export it in more or less any text representation that I can then work with. Thanks.
I ended up exporting the data by using isql and these commands (where #{table} is each of the tables, a list I built manually):
SELECT * FROM #{table};
OUTPUT TO "C:\export\#{table}.csv" FORMAT ASCII DELIMITED BY ',' QUOTE '"' ALL;
SELECT * FROM #{table};
OUTPUT TO "C:\export\#{table}.txt" FORMAT TEXT;
I used the CSV to import the data itself and the txt to pick up the names of the fields (parsing only the first line). The txt can become rather huge if you have a lot of data.
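To illustrate how the two files fit together, a small Python sketch (the paths and table name are placeholders, and the assumption that the TEXT export's first line holds whitespace-separated field names may need adjusting for your isql version):

import csv

table = "books"  # placeholder table name

# Field names: first line of the TEXT export.
with open(rf"C:\export\{table}.txt", encoding="latin-1") as f:
    columns = f.readline().split()

# Data: the comma-delimited, quoted ASCII export.
with open(rf"C:\export\{table}.csv", newline="", encoding="latin-1") as f:
    rows = [dict(zip(columns, rec)) for rec in csv.reader(f)]

print(columns)
print(rows[0])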
Have a read: http://www.lansa.com/support/tips/t0220.htm