I am relatively new to SQLite3. I am currently using DB Browser. I have two very large .db files that I want to combine so that all the tables (all different) reside together in one .db (i.e. I don't want to just attach them).
I am trying to use a simple .dump command to achieve this, but keep getting errors. I first define the path/directory and then:
for db in OctoberData, OctoberData2; { sqlite3 ${db}.db .dump | sqlite3 OctoberDataAll.db }
and get
Result: near "for": syntax error
At line 1:
for
I'm sure it is something simple, but what am I doing wrong?
Thank you for your help!
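For comparison, here is a minimal Python sketch of the same dump-and-copy idea using the standard sqlite3 module (the three file names are taken from the question; it assumes the table names in the two databases do not clash):

import sqlite3

# Dump each source database and replay its SQL into the combined database.
sources = ["OctoberData.db", "OctoberData2.db"]
combined = sqlite3.connect("OctoberDataAll.db")

for path in sources:
    src = sqlite3.connect(path)
    # iterdump() yields the same SQL statements that ".dump" prints in the shell.
    combined.executescript("\n".join(src.iterdump()))
    src.close()

combined.close()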
I was trying to load a CSV file from an AWS S3 bucket with the COPY INTO command, and one of the CSV files threw an error like:
End of record reached while expected to parse column
'"RAW_PRODUCTS"["PACK_COUNT_UNITS":25]
With VALIDATION_MODE = RETURN_ALL_ERRORS it also gives me 2 rows that have errors, but I am not sure what the errors are.
My concern is: can we get the specific error so that we can fix it in the file?
You might try using the VALIDATE table function. https://docs.snowflake.com/en/sql-reference/functions/validate.html
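For example, a rough sketch of calling it through the Python connector (the connection details are placeholders, and the table name raw_products is only guessed from the error message above, so adjust as needed):

import snowflake.connector

# Placeholders -- fill in your own account details.
conn = snowflake.connector.connect(
    user="<user>", password="<password>", account="<account>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
cur = conn.cursor()

# '_last' asks for the errors of the most recent COPY INTO run on this table
# in the current session.
cur.execute("SELECT * FROM TABLE(VALIDATE(raw_products, JOB_ID => '_last'))")
for row in cur.fetchall():
    print(row)

conn.close()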
Thanks Eda, I already reviewed the above link, but it did not work with the SQL query that does COPY INTO the table from the S3 bucket, so I created a stage, placed the CSV file on the stage, and then ran the VALIDATE command, which gave me the same error row.
There is another way to identify errors while executing the COPY INTO command: you can add VALIDATION_MODE = RETURN_ALL_ERRORS and you will get the same result.
By the way, I resolved the error: it was due to /,,. I removed the / and it loaded successfully. / or /, works, as it did in other rows, but /,, did not work.
Is there a way for me to import CSV files to Cloud SQL? I've tried to follow this tutorial but it didn't work: https://cloud.google.com/sql/docs/admin-api/v1beta4/instances/import#request-body
Is there any other way to accomplish this?
Thanks!
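For reference, a rough sketch of what the import call from that tutorial looks like with the google-api-python-client; every name below (project, instance, bucket, database, table) is a placeholder, so treat this as an outline rather than a tested recipe:

from googleapiclient import discovery

# Uses Application Default Credentials; all identifiers below are placeholders.
service = discovery.build("sqladmin", "v1beta4")
body = {
    "importContext": {
        "fileType": "CSV",
        "uri": "gs://<bucket>/<file>.csv",        # CSV file in Cloud Storage
        "database": "<database>",
        "csvImportOptions": {"table": "<table>"},
    }
}
# The method is import_ (trailing underscore) because "import" is a Python keyword.
service.instances().import_(project="<project>", instance="<instance>", body=body).execute()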
If you provide some more information regarding the failure message we might be able to tell you what you need to fix in order to complete the tutorial. However, you can do something a little ugly, but it works. You'll need to import pymysql.
So let's assume that the CSV data is in a csv_file_lines list variable. Now we need to go over the lines and execute an insert into the table with the current line's data, in the following way:
import pymysql

db = pymysql.connect(host=<host>, port=<port>, user=<user>, password=<password>, database=<database>)
cursor = db.cursor()
for line in csv_file_lines:
    cursor.execute('INSERT INTO <table> VALUES (%s, %s)', line)  # one %s per CSV column
db.commit()
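For completeness, a hedged sketch of how csv_file_lines could be filled beforehand with the standard csv module (the file name data.csv is an assumption, not something from the question):

import csv

# Hypothetical file name -- point this at your actual CSV export.
with open("data.csv", newline="") as f:
    csv_file_lines = list(csv.reader(f))  # each entry is a list of column values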
This is my first time trying to use Informix. I have around 160 tables to load, using pipe-delimited text files. We have an older series of batch files that a previous developer wrote to load Informix data, but they're not working with the new version of Informix (11.5) that I installed. I'm running it on a Windows 2003 server.
I've modified the batch file to execute the onpladm commands for one file, so this batch file looks like this:
onpladm create project dif31US-1-table-Load
onpladm create object -F diffdbagidaxsid.dev
onpladm create object -F diffdbagidaxsid.fmt
onpladm create object -F diffdbagidaxsid.map
onpladm create object -F diffdbagidaxsid.job
When I run this, it successfully creates the project and device array,
but I get an error creating the format. The only error I get is:
Create object DELIMITEDFORMAT diffile1fmt failed!
Invalid format!
The diffdbagidaxsid.fmt file is as follows:
BEGIN OBJECT DELIMITEDFORMAT diffile1fmt
PROJECT dif31US-1-table-Load
CHARACTERSET ASCII
RECORDSTART
RECORDEND
FIELDSTART
FIELDEND
FIELDSEPARATOR |
BEGIN SEQUENCE
FIELDNAME agid
FIELDTYPE Chars
END SEQUENCE
BEGIN SEQUENCE
FIELDNAME axsid
FIELDTYPE Chars
END SEQUENCE
END OBJECT
As you can see, it is only 2 columns. It originally had nothing following the CHARACTERSET. I've tried it with ASCII, and with the numeric code for ASCII, and still get the same error.
Is there any way to get a more verbose error message?
Also, can anyone recommend a decent (meaning active community) forum for Informix? I've tried the old comp.databases.informix forum, http://www.dbforums.com, the 'official' forum on IBM DeveloperWorks, and here of course. None have very much activity. We have to do this testing because we have customers (or maybe just 1 big one) who uses it, so we have to test our data and API against it.
Succinctly, I don't think there is a way to get much more information out of onpladm.
I just want to know if there is any way we can read a value from an .xls file using a .bat file.
For example: if I have an .xls named test.xls which has two columns, namely 'EID' and 'MailID', then when we give the EID as input, it should extract the mail ID which corresponds to that EID and echo the result.
EID       MailID
E22222    MynameisA@company.com
E33333    MynameisB@company.com
...
...
So, per the above table, when I give E22222 as the input to the .xls file using my .bat file, it should read the corresponding mail ID as MynameisA@company.com and echo the value.
I hope I have been able to explain my question. Please get back to me if you need more clarification.
Thanks and regards
Maddy
There is no facility to do this directly with traditional .bat files. However, you might investigate PowerShell, which is designed to be able to do this sort of thing. PowerShell integrates well with existing Windows applications (such as Excel) and may provide the tools you need to do this easily.
A quick search turned up this example of reading Excel files from PowerShell.
You can't do this directly from a batch file. Furthermore, to manipulate Excel files in scripting you need Excel to be installed.
What you can do is wrap the Excel-specific stuff in a VBScript and call that from your batch.
You can do it with Alacon, a command-line utility for the Alasql database.
It works with Node.js, so you need to install Node.js and then the Alasql package.
To take data from an Excel file you can use the following command:
> node alacon "SELECT VALUE [mail ID] FROM XLS('mydata.xls', {headers:true})
WHERE EID = ?" "E22222"
The first parameter is a SQL expression, which reads data from the XLS file (with headers) and searches for the second parameter's value: "E22222". The command returns the mail ID value.
This will be hard (very close to impossible) in BAT, especially when using the original XLS file, but even after an export to CSV it will be much easier to use a script/programming language (Perl, C, whatever) to do this.
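For illustration, a rough Python sketch of that script-language approach using the xlrd package (test.xls and the EID/MailID layout come from the question; the EID to look up is passed on the command line):

import sys
import xlrd  # third-party package for reading legacy .xls workbooks

eid = sys.argv[1]  # e.g. E22222
sheet = xlrd.open_workbook("test.xls").sheet_by_index(0)

for row in range(1, sheet.nrows):  # row 0 is the EID/MailID header
    if sheet.cell_value(row, 0) == eid:
        print(sheet.cell_value(row, 1))  # echo the matching mail ID
        break

A .bat file could then call it with something like python lookup.py E22222 (lookup.py being whatever you name the script) and capture the echoed address.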
I have a select query producing a big output and I want to execute it in SQL Developer and get all the results into a file.
SQL Developer does not allow a result bigger than 5,000 lines, and I have 100,000 lines to fetch...
I know I could use SQL*Plus, but let's assume I want to do this in SQL Developer.
Instead of using Run Script (F5), use Run Statement (Ctrl+Enter). Run Statement fetches 50 records at a time and displays them as you scroll through the results...but you can save the entire output to a file by right-clicking over the results and selecting Export Data -> csv/html/etc.
I'm a newbie SQLDeveloper user, so if there is a better way please let me know.
This question is really old, but posting this so it might help someone with a similar issue.
You can store your query in a query.sql file and run it as a script. Here is a sample query.sql:
spool "C:\path\query_result.txt";
select * from my_table;
spool off;
In Oracle SQL Developer you can just run this script as shown below, and you should get the result in your query_result.txt file.
#"C:\Path\to\script.sql"
Yes, you can increase the limit by changing the setting Tools -> Preferences -> Database -> Worksheet -> Max rows to print in a script (set it to whatever you need).
Mike G's answer will work if you only want the output of a single statement.
However, if you want the output of a whole SQL script with several statements, SQL*Plus reports, and other output formats, you can use the spool command the same way it is used in SQL*Plus.