Is there a way for me to import CSV files to Cloud SQL? I've tried to follow this tutorial but it didn't work: https://cloud.google.com/sql/docs/admin-api/v1beta4/instances/import#request-body
Is there any other way to accomplish this?
Thanks!
If you provide some more information about the failure message, we might be able to tell you what you need to fix in order to complete the tutorial. However, you can do something a little ugly, but it works. You'll need to import pymysql.
So let's assume the CSV data is in a list variable called csv_file_lines. Now we need to go over the lines and execute an INSERT into the table with each line's data (the table and column layout below are placeholders), in the following way:
import pymysql

db = pymysql.connect(host=<host>, port=<port>, user=<user>, password=<password>)
cursor = db.cursor()
for line in csv_file_lines:
    # placeholder table/columns; pass the CSV fields as query parameters
    cursor.execute('INSERT INTO my_table VALUES (%s, %s)', line.strip().split(','))
db.commit()
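Alternatively, if the Cloud SQL instance is MySQL-based and LOAD DATA LOCAL INFILE is allowed by both the client and the server (a hedged sketch; the table name, path, and delimiters below are placeholders), the whole file can be loaded in one statement from a MySQL client:
-- placeholder table and path; requires local_infile to be enabled
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 LINES;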
I am relatively new to SQLite3. I am currently using DB Browser. I have two very large .db files that I want to combine so that all the tables (all different) reside together in one .db (i.e. I don't want to just attach them).
I am trying to use a simple .dump code in order to achieve this, but keep getting errors. I first define the path/directory and then:
for db in OctoberData, OctoberData2; { sqlite3 ${db}.db .dump | sqlite3 OctoberDataAll.db }
and get
Result: near "for": syntax error
At line 1:
for
I'm sure it is something simple, but what am I doing wrong?
Thank you for your help!
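For what it's worth, the for ... { ... } line is shell-style syntax, not SQL, so DB Browser's SQL editor stops at the near "for" error. If staying inside SQL is preferred, one sketch (the table names here are hypothetical, and the ATTACH is only temporary; the end result is still a single OctoberDataAll.db) is:
-- run against a new, empty OctoberDataAll.db; repeat the CREATE TABLE line for each table
ATTACH DATABASE 'OctoberData.db' AS oct1;
ATTACH DATABASE 'OctoberData2.db' AS oct2;
CREATE TABLE some_table AS SELECT * FROM oct1.some_table;
CREATE TABLE another_table AS SELECT * FROM oct2.another_table;
DETACH DATABASE oct1;
DETACH DATABASE oct2;
Note that CREATE TABLE ... AS SELECT copies the rows but not indexes or constraints.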
I'm trying to import data into my MS SQL DB (as a flat file). However, there is a problem with one of the fields: it contains a line break within the data, which makes the import wizard think it has reached the end of the line, hence breaking each row into two. I've tried to import the data into Excel as well (just to try it out), but it's the same behavior.
Does anyone know how to solve this? Any pre-import mechanism that might massage the data somehow?
(unfortunately, it's not practically possible for me to ask the source system to change the encoding)
//Eva-Lotta
Use the following to replace the newline characters in the affected columns:
Replace(Replace(columnName,char(13),' '),char(10),' ')
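For instance, applied on the source side when extracting the flat file (the table and column names here are hypothetical), it would look like:
-- hypothetical source table/column; run where the flat file is produced
SELECT Replace(Replace(ColumnWithBreaks, char(13), ' '), char(10), ' ') AS CleanColumn
FROM SourceTable;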
Regards
I've managed to find a workaround! I start with splitting the files into chunks (as they are 3.8 GB in size ...), open them in UltraEdit, loop through them to join the two lines together, and import them into Excel / my SQL DB. It's not neat, but it has solved my immediate problem ... thanks for your engagement!
I need to import a list of files (txt) to SQL Server, but I would like to do this without reading them in R. Is there a way to do this?
I know that this works, but I would like to avoid the read.table part:
tabla<-read.table("Cuota_Ingreso_201612.txt", header=TRUE)
require(RODBC)
dbWar <- odbcDriverConnect('driver={SQL Server};server=XX;database=Warehouse;trusted_connection=true')
sqlSave(dbWar, tablename='Cuota_Ingreso_201612', tabla, rownames=FALSE)
Thanks.
PS: I want to use R because it is a big list of files and I want to write a loop to do it. Please don't answer using another program.
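One loop-friendly sketch that keeps everything in the same RODBC session is to push the parsing to the server with a BULK INSERT sent through sqlQuery. This assumes the target table already exists and that the SQL Server service account can read the file; the path and delimiters below are placeholders:
-- sent as a string via sqlQuery(dbWar, "...") for each file in the loop
BULK INSERT Cuota_Ingreso_201612
FROM 'C:\data\Cuota_Ingreso_201612.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');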
I wanted to know how to read a file on my desktop using pg_read_file in PostgreSQL.
pg_read_file(filename text [, offset bigint, length bigint])
my query
select pg_read_file('/root/desktop/new.txt' , 0 , 1000000);
error
ERROR: absolute path not allowed
UPDATE
pg_read_file can only read files from the data directory path. If you would like to know your data directory path, use:
SHOW data_directory;
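For example, once new.txt has been copied into that directory (a sketch; the actual data directory location varies by installation, and pg_read_file typically requires superuser rights), a relative path is allowed:
-- relative to the server's data directory
SELECT pg_read_file('new.txt', 0, 1000000);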
I think you can resolve your problem by looking at this post.
If you're using psql you can use \lo_import to create a large object from a local file.
The pg_read_file function only allows reads from server-side files.
To read the content of a file from PostgreSQL, you can use this:
CREATE TABLE demo(t text);
COPY demo from '[FILENAME]';
SELECT * FROM demo;
Each text line goes into one SQL row. Useful for temporary transfers.
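Note that COPY ... FROM reads the file on the database server; if the file only exists on the client machine (like the desktop file in the question) and psql is the client, the client-side equivalent is:
\copy demo FROM '/root/desktop/new.txt'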
lo_import(file path) will generate an oid. This may solve your problem; you can import any type of file using this (even an image).
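A minimal sketch of that route in psql (the file is read client-side; lo_get needs PostgreSQL 9.4 or later):
\lo_import '/root/desktop/new.txt'
-- psql stores the new object's oid in :LASTOID
SELECT lo_get(:LASTOID);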
I have a SELECT query producing a big output and I want to execute it in SQL Developer and get all the results into a file.
SQL Developer does not allow a result bigger than 5,000 lines, and I have 100,000 lines to fetch...
I know I could use SQL*Plus, but let's assume I want to do this in SQL Developer.
Instead of using Run Script (F5), use Run Statement (Ctrl+Enter). Run Statement fetches 50 records at a time and displays them as you scroll through the results...but you can save the entire output to a file by right-clicking over the results and selecting Export Data -> csv/html/etc.
I'm a newbie SQLDeveloper user, so if there is a better way please let me know.
This question is really old, but posting this so it might help someone with a similar issue.
You can store your query in a query.sql file and run it as a script. Here is a sample query.sql:
spool "C:\path\query_result.txt";
select * from my_table;
spool off;
In Oracle SQL Developer you can just run this script as shown below, and you should get the result in your query_result.txt file:
@"C:\Path\to\script.sql"
Yes, you can increase the limit by changing the setting Tools -> Preferences -> Database -> Worksheet -> Max rows to print in a script (set it to whatever you need).
Mike G's answer will work if you only want the output of a single statement.
However, if you want the output of a whole SQL script with several statements, SQL*Plus reports, and some other output formats, you can use the spool command the same way it is used in SQL*Plus.