I'm trying to copy the contents of a production DB to use for local development. I've tried using SQLite's .dump and then doing cat dump.sql | sqlite3 db.sqlite3, but this just prints out loads of errors like:
Error: near line 3: table "django_migrations" already exists
Error: near line 4: UNIQUE constraint failed: django_migrations.id
Error: near line 5: UNIQUE constraint failed: django_migrations.id
Error: near line 6: UNIQUE constraint failed: django_migrations.id
Error: near line 7: UNIQUE constraint failed: django_migrations.id
Error: near line 8: UNIQUE constraint failed: django_migrations.id
Error: near line 9: UNIQUE constraint failed: django_migrations.id
I've also tried using Django's own dump/load commands, but when I do ./manage.py loaddata db.json I get loads of errors:
Traceback (most recent call last):
File "/Users/jack/dev/web_design/kaoru_wada/venv/lib/python3.5/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/Users/jack/dev/web_design/kaoru_wada/venv/lib/python3.5/site-packages/django/db/backends/sqlite3/base.py", line 337, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.IntegrityError: UNIQUE constraint failed: django_content_type.app_label, django_content_type.model
I get these errors even if I delete the existing local DB, run migrations, and then try to import the dumps into the fresh local DB.
Any ideas on how to copy the DB contents and successfully import it?
Your .sql dump file contains the information for constructing database tables as well as the data to populate them, so there's no need to run migrations as a separate step - that will just duplicate part of the process (and throw errors as it tries to create tables that already exist).
Simply running the cat dump.sql | sqlite3 db.sqlite3 step without creating the db.sqlite3 file beforehand will give you a fully populated database in the same state as if you'd run migrations.
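A minimal sketch of the whole round trip, assuming the production database file is also named db.sqlite3:

# on the production machine: dump schema and data as SQL statements
sqlite3 db.sqlite3 .dump > dump.sql
# locally: make sure no old database file is lying around, then restore
rm -f db.sqlite3
sqlite3 db.sqlite3 < dump.sql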
Related
I was trying to load CSV files from an AWS S3 bucket with the COPY INTO command, and one of the CSV files throws an error like:
End of record reached while expected to parse column
'"RAW_PRODUCTS"["PACK_COUNT_UNITS":25]
With VALIDATION_MODE = RETURN_ALL_ERRORS it also gives me the 2 rows that have errors, but I am not sure what the errors actually are.
My concern is: can we get the specific error so that we can fix it in the file?
You might try using the VALIDATE table function. https://docs.snowflake.com/en/sql-reference/functions/validate.html
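For example, a sketch (my_table is a placeholder for your target table) that returns the rows rejected by the most recent COPY INTO against it:

SELECT * FROM TABLE(VALIDATE(my_table, JOB_ID => '_last'));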
Thanks Eda, I already reviewed the link above, but that did not work with the COPY INTO query straight from the S3 bucket, so I created a stage, placed the CSV file on the stage, and then ran that VALIDATE command, which gave me the same error row.
There is another way to identify errors while executing the COPY INTO command: you can add VALIDATION_MODE = RETURN_ALL_ERRORS and you will get the same result.
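Something like this sketch (table, stage, and file names are placeholders); with a validation mode set, the statement only reports errors and loads nothing:

COPY INTO my_table
FROM @my_stage/my_file.csv
FILE_FORMAT = (TYPE = 'CSV')
VALIDATION_MODE = RETURN_ALL_ERRORS;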
By the way, I resolved the error: it was due to a /,, sequence in the data. I removed the / and it loaded successfully. / or /, works, as it did in other rows, but /,, did not.
I have a dump of a huge database (around 30 GB) that I'm trying to import with pgAdmin 3.
There are around 600 tables in the database - and it all works fine until the last ~50 tables are reached...
I'm importing the dump via the cmd prompt
Here's the error message I get:
ERROR: invalid input syntax for timestamp: »S«
CONTEXT: COPY XXXX, line 100970, column odb_created_at: »S«
I really have no idea how to handle this error, since I cannot open the script in any editor.
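For what it's worth, the line number in the COPY error counts from the start of that table's data block in the dump, so something like this sketch (XXXX stands in for the real table name) should print the offending row without opening the file in an editor:

awk '/^COPY XXXX/{start=NR} start && NR==start+100970 {print; exit}' dump.sql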
BR
I am trying to call a stored procedure through Liquibase's <sql> tag.
It gives me an error with SQL code:
DB2 SQL Error: SQLCODE=-440, SQLSTATE=42884
I am trying to call it as follows:
<sql> CALL TestProcedure('abc','xyz') </sql>
It executes fine from the command line client, outside of Liquibase.
I also tried calling it using the schema name, with no luck. Open to suggestions.
SQL0440N is routine not found. The error message should give the text of the routine name. Does it match what you have in your database? Is it a case-sensitivity problem? (i.e. try it in upper case?)
SQL0440N No authorized routine named "<routine-name>" of type
"<routine-type>" having compatible arguments was found.
I'm stuck with an SSIS package that I created for importing xlsx files to a database. Since some files have data with more than 255 characters, I set that column to DT_NTEXT. If I just leave one xlsx file that I know has this long data, the package works fine with no errors. But if I leave all the files that need to be imported in the import folder, I get the following errors:
[VENTA_IMS_EXCEL [1]] Error: SSIS Error Code DTS_E_OLEDBERROR.
An OLE DB error has occurred. Error code: 0x80040E21.
[VENTA_IMS_EXCEL [1]] Error: Failed to retrieve long data for column "F17".
[VENTA_IMS_EXCEL [1]] Error: There was an error with output column
"SubFamilia" (16693) on output "Excel Source Output" (9).
The column status returned was: "DBSTATUS_UNAVAILABLE".
[VENTA_IMS_EXCEL [1]] Error: SSIS Error Code
DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "SubFamilia"
(16693)" failed because error code 0xC0209071 occurred, and the error row disposition on
"output column "SubFamilia" (16693)" specifies failure on error. An error occurred on the
specified object of the specified component. There may be error messages posted
before this with more information about the failure.
[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED.
The PrimeOutput method on component "VENTA_IMS_EXCEL" (1) returned error code 0xC0209029.
The component returned a failure code when the pipeline engine called PrimeOutput().
The meaning of the failure code is defined by the component, but the error is fatal
and the pipeline stopped executing. There may be error messages posted before this
with more information about the failure.
My guess is that the problem is that it evaluates each file for the kind of data to work with, and in the cases where the data has fewer than 255 characters, it fails.
Can anyone help me with this? How can I solve it so the package can loop and import all the files with no problems?
This is a common issue with Excel files. The Excel driver infers the datatype for each column based on the first 8 rows. Review what datatype your data source is assigning to your column, and then confirm that all values conform to this datatype.
Review this blog post: https://www.concentra.co.uk/blog/why-ssis-always-gets-excel-data-types-wrong-and-how-to-fix-it
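Two workarounds commonly paired with that advice, sketched here with a placeholder file path: add IMEX=1 to the Excel connection string's Extended Properties so mixed-type columns are read as text, and set the driver's TypeGuessRows registry value to 0 so it samples the whole sheet instead of only the first 8 rows:

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\imports\sales.xlsx;Extended Properties="Excel 12.0 Xml;HDR=YES;IMEX=1";

The registry value lives at HKLM\SOFTWARE\Microsoft\Office\14.0\Access Connectivity Engine\Engines\Excel\TypeGuessRows (14.0 stands in for whatever Office/ACE version is installed).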
Here is my XML snippet:
<jdbc:initialize-database data-source="dataSource">
<jdbc:script location="/WEB-INF/sqlscripts/age.sql" encoding="UTF-8" />
</jdbc:initialize-database>
When starting my Spring web application via Jetty, I got the following error:
Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.jdbc.datasource.init.DataSourceInitializer#0': Invocation of init method failed; nested exception is org.springframework.dao.DataAccessResourceFailureException: Failed to execute database script; nested exception is org.springframework.jdbc.datasource.init.ScriptStatementFailedException: Failed to execute SQL script statement at line 1 of resource ServletContext resource [/WEB-INF/sqlscripts/age.sql]: I?N?S?E?R?T? ?A?g?e?G?r?o?u?p? ?(?d?e?f?a?u?l?t?O?p?t?i?o?n?,? ?d?e?s?c?r?i?p?t?i?o?n?,? ?d?i?s?p?l?a?y?S?o?r?t?,? ?n?a?m?e?,? ?c?o?d?e?,? ?l?o?c?a?l?e?)? ?V?A?L?U?E?S? ?(?1?,? ?N?U?L?L?,? ?1?,? ?N?'?1?6?-?2?0?'?,? ?N?'?1?6?-?2?0?'?,? ?N?'?e?n?_?U?S?'?)?
.....
com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near 'S'.
....
I don't know where these question marks in the error message come from. The SQL statement in age.sql has one line:
INSERT age(defaultOption, description, displaySort, name, code, locale) VALUES (1, NULL, 1, N'16-20', N'16-20', N'en_US');
I was able to run the same SQL statement successfully within MS SQL server manually. My SQL statement can have foreign characters.
I am using Spring 3.2.
What went wrong?
Thanks!
The file with the SQL inserts (age.sql in this example) was encoded as UTF-8 because it has characters from different languages. I changed it to UTF-8 without BOM, and then it worked.
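To check what is actually on disk, something like this sketch works (the source encoding passed to iconv is an assumption; use whatever file reports):

# inspect the file's real encoding
file age.sql
# if it reports a BOM or UTF-16, rewrite it as plain UTF-8
iconv -f UTF-16 -t UTF-8 age.sql > age-utf8.sql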
Hope this helps someone else.