On my Mac I’ve installed an instance of MS-SQL Server in a Docker container. I’m able to connect to it using Azure Data Studio and can create tables, write queries, etc. without any problem. I’m now trying to seed a table by doing a bulk insert from an external CSV file. I’ve copied the files into the Docker container as described in this post: Link
When I try to run the bulk insert, I get the following error:
Cannot bulk load because the file "/CountryErrors.txt" could not be opened. Operating system error code 5(Access is denied.)
I’m not very familiar with Linux and the CLI, but I’m guessing the permissions are incorrect. I’m not sure what to change them to, or how.
-rw-r--r-- 1 501 dialout 0 Jul 18 23:51 CountryErrors.txt
-rw-rw-r-- 1 501 dialout 478 Sep 7 2019 CountryFormat.fmt
-rw-rw-rw- 1 501 dialout 11816 Sep 7 2019 CountrySeed.csv
Relevant SQL code:
BULK INSERT [dbo].[#CountryTEMP]
FROM '/CountrySeed.csv'
WITH
(
FIRSTROW = 2,
FORMATFILE = '/CountryFormat.fmt',
ERRORFILE = '/CountryErrors.txt'
)
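For reference, a minimal sketch of how the files could be handed to the SQL Server process inside the container (the container name sql1 is an assumption, and recent mssql-server images run SQL Server as the non-root mssql user, so adjust to your setup):
# copy the files in and make them readable by the SQL Server process
docker cp CountrySeed.csv sql1:/CountrySeed.csv
docker cp CountryFormat.fmt sql1:/CountryFormat.fmt
# 'mssql' is the service account in recent images; older 2017 images run as root
docker exec -u root sql1 chown mssql /CountrySeed.csv /CountryFormat.fmt
docker exec -u root sql1 chmod 644 /CountrySeed.csv /CountryFormat.fmt
# BULK INSERT creates the ERRORFILE itself, so remove any stale copy it cannot overwrite
docker exec -u root sql1 rm -f /CountryErrors.txt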
Related
I am using an OLE DB source to import data from a Postgres database into a SQL Server database using an OLE DB Destination. When I execute the package, it runs successfully.
Even though it says 'N' records were written to the OLE DB destination, I can't always find all of the records in the destination table. There are no error records.
For example, when I query my Employee table in Postgres:
EmpId Name PositionId
10 Bob 1
25 Alex 2
54 Mary 22
When I preview the data in the OLE DB Source, using a connection manager pointing to Postgres, I only find:
EmpId Name PositionId
10 Bob 1
54 Mary 22
I can't find this record:
EmpId Name PositionId
25 Alex 2
I ended up using an ODBC PostgreSQL connection instead:
I configured a PostgreSQL connection in the ODBC Administration Tool.
I opened the version of the tool that matches the package bitness (32/64-bit).
I clicked the Add button.
I selected the PostgreSQL Unicode driver and clicked Finish, then filled in the fields of the window that opens with the connection properties of my database.
There's a SQL Server instance MSSQLSERVER running on localhost on Windows 7. I realized that its commit is much larger than its working set. Here’s a comparison between my local instance and another instance, MS_MSBI_SSDS, running on Windows Server 2008 R2.
Local SQL Server
Image         PID    Hard Faults/sec  Commit (KB)  Working Set (KB)  Shareable (KB)
sqlservr.exe  2380   0                45,615,948   61,992            17,784
Remote SQL Server
Image         PID    Hard Faults/sec  Commit (KB)  Working Set (KB)  Shareable (KB)  Private (KB)
sqlservr.exe  1964   1                6,464,988    5,496,884         40,608          5,456,636
The large commit makes the local machine almost unusable: the commit charge is at 100% as soon as MSSQLSERVER is launched. Please note that there isn’t any particular workload running against the local SQL Server, and it only has 2 databases (8 GB), copied from the remote one.
My questions are:
Why does the local instance have such a large commit when it has only a small working set?
Can I find out what has actually been committed?
How can I decrease its commit charge?
Might the problem come from McAfee? I don't have the rights to modify it due to company policy. What can I do? Here's a related post: SQLSERVR.EXE High Commit Usage causing a low virtual memory condition.
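For context, the first mitigation I have seen suggested is to cap how much memory the instance may take, and then look at what it thinks it has committed. A minimal T-SQL sketch (the 2048 MB figure is only an example, and the committed_kb columns assume SQL Server 2012 or later; older builds expose bpool_committed/bpool_commit_target instead):
-- Cap SQL Server's memory so it cannot drive the machine's commit charge to 100%
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;   -- example value, size it to the workstation
RECONFIGURE;
-- See how much the instance has committed versus its target
SELECT committed_kb, committed_target_kb
FROM sys.dm_os_sys_info;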
I have developed an Oracle Pro*C/C++ library that provides an API for other applications to read data from database tables and to insert/update data in those tables. Recently a customer reported that the API did not return any error even though the underlying file systems (ASM) that hold the tablespaces were down for a few days! This must be because the Oracle instance was serving the data from its cache.
My question is this: can we make Oracle return an error, without affecting its caching scheme, for any read/write access to a table as soon as its tablespace becomes inaccessible due to corruption or any disk-related error?
I did a small experiment to simulate the customer's problem. I created a new directory on my Linux system, created a tablespace in it, and created a table that uses the tablespace. I inserted a few rows using sqlplus, fetched the rows back, and verified the results; they were OK. Then I renamed the tablespace's datafile to simulate a missing/corrupted tablespace. Oracle still returned rows even though the physical datafile was missing. I could even insert a few rows successfully, and the commit went through without any error (this would probably fail for a large number of inserts). However, I could not shut down the database instance. Once I aborted the instance, startup failed with an error saying the datafile could not be identified/locked, and after that sqlplus could not connect to the database. But my requirement is to get an error for any DML operation as soon as the tablespace file is found to be missing or corrupted.
Following is my experiment:
Created a directory for a new tablespace
[root@mvsLTOraLin u01]# mkdir /orats
[root@mvsLTOraLin u01]# chown oracle /orats
[root@mvsLTOraLin u01]# chgrp oinstall /orats
Created a table space using sysdba user as following:
SQL> create tablespace orats datafile '/orats/ots1.dbf' size 5M;
Tablespace created.
[root@mvsLTOraLin orats]# pwd
/orats
[root@mvsLTOraLin orats]# ls -l
total 5128
-rw-r-----. 1 oracle oinstall 5251072 Jul 11 17:53 ots1.dbf
Created a table in the table space using fis_admin user and inserted 3 rows:
-bash-4.1$ sqlplus fis_admin/fis_admin@fisdb
SQL*Plus: Release 11.2.0.1.0 Production on Sat Jul 11 17:56:50 2015
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create table tbl1(c1 int not null primary key, c2 varchar2(100)) tablespace orats;
Table created.
SQL> insert into tbl1 values(1, 'one');
1 row created.
SQL> insert into tbl1 values(2, 'two');
1 row created.
SQL> insert into tbl1 values(3, 'three');
1 row created.
SQL> select * from tbl1;
C1
----------
C2
--------------------------------------------------------------------------------
1
one
2
two
3
three
Renamed the tablespace's datafile from ots1.dbf to gone.dbf to simulate corruption of the tablespace.
[root@mvsLTOraLin orats]# date
Sat Jul 11 18:03:00 IST 2015
[root@mvsLTOraLin orats]# mv ots1.dbf gone.dbf
[root@mvsLTOraLin orats]# ls -l
total 5128
-rw-r-----. 1 oracle oinstall 5251072 Jul 11 17:53 gone.dbf
Selected data from the sqlplus session that was still connected
SQL> select c1, current_timestamp from tbl1;
C1
----------
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
1
11-JUL-15 06.05.20.525276 PM +05:30
2
11-JUL-15 06.05.20.525276 PM +05:30
3
11-JUL-15 06.05.20.525276 PM +05:30
The rows were retrieved even though the datafile had disappeared.
This shows that Oracle is returning the data from its cache.
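(As an aside, and not part of the experiment itself: one way to confirm that the rows are coming from the cache is to force the next read to go to disk. Flushing the buffer cache defeats exactly the caching I want to preserve, so this is purely a diagnostic sketch.)
-- diagnostic only: force physical reads so the missing datafile is hit immediately
ALTER SYSTEM FLUSH BUFFER_CACHE;
SELECT c1 FROM tbl1;   -- expected to fail now with ORA-01116 / ORA-27041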
I restarted sqlplus and queried again; the server still did not realise that its datafile was missing!
SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
-bash-4.1$
-bash-4.1$ sqlplus fis_admin/fis_admin@fisdb
SQL*Plus: Release 11.2.0.1.0 Production on Sat Jul 11 18:07:24 2015
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select c1, current_timestamp from tbl1;
C1
----------
CURRENT_TIMESTAMP
---------------------------------------------------------------------------
1
11-JUL-15 06.07.42.014083 PM +05:30
2
11-JUL-15 06.07.42.014083 PM +05:30
3
11-JUL-15 06.07.42.014083 PM +05:30
Tried to shut down the instance
-bash-4.1$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Sat Jul 11 18:08:42 2015
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> shutdown
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/orats/ots1.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
SQL>
It failed as expected.
Tried to insert a row.
SQL> insert into tbl1 values(4, 'four');
1 row created.
Surprise! The insert was successful. It may well fail for a larger number of inserts.
Forced shutdown
SQL> shutdown
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/orats/ots1.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
SQL> shutdown immediate
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/orats/ots1.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
SQL> shutdown abort
ORACLE instance shut down.
SQL> quit
Restarted instance
SQL> startup;
ORACLE instance started.
Total System Global Area 1068937216 bytes
Fixed Size 2220200 bytes
Variable Size 742395736 bytes
Database Buffers 318767104 bytes
Redo Buffers 5554176 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 5 - see DBWR trace file
ORA-01110: data file 5: '/orats/ots1.dbf'
The error is shown as expected.
Tried to access tbl1:
SQL> select c1, current_timestamp from tbl1;
select c1, current_timestamp from tbl1
*
ERROR at line 1:
ORA-01219: database not open: queries allowed on fixed tables/views only
Failed as expected.
So, is there a way to make Oracle return an error on the next read or write access to the table, even though the data may still be available in its cache?
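One direction I am considering, sketched below rather than confirmed: have the API probe datafile health itself instead of waiting for Oracle to notice. Querying V$DATAFILE_HEADER re-reads the datafile headers, so a missing or corrupt file should surface in its ERROR column even while cached blocks keep satisfying queries, and ALTER SYSTEM CHECK DATAFILES asks the instance to re-verify datafile accessibility without flushing the cache.
-- report datafiles whose header can no longer be read or validated
SELECT file#, status, error
FROM   v$datafile_header
WHERE  error IS NOT NULL;
-- ask the instance to re-verify access to its datafiles
ALTER SYSTEM CHECK DATAFILES;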
I have an application which creates database schemas (Oracle 10g) for its users. Access to these schemas expires after a certain time. These schemas can be as large as 2 GB in size. The actual operational data for the application is comparatively small.
To keep the database size down, what would be the best approach to archiving these schemas, given that they must be restorable when a user needs access to them again?
I am thinking of the following approach:
Export each table of the schema to a .csv file and then compress the files (zip). Using CSV can be an advantage since it is easy to convert CSV to/from database tables.
Please let me know if there is a better approach. The main aim here is to save operational DB space.
Use Data Pump Export and Data Pump Import instead of building a custom tool. Exporting and importing data and metadata is not a trivial task. Data Pump was built for situations like this, it is already included with Oracle, and it has many advanced features.
Here's a very simple example of archiving a schema using data pump.
Create a directory to hold the export. This is only required once per database.
SQL> create directory export_directory as 'C:\test';
Directory created.
Create a test schema and sample data.
SQL> create user test_user identified by test_user;
User created.
SQL> alter user test_user quota unlimited on users;
User altered.
SQL> create table test_user.table1 as select 1 a from dual;
Table created.
Export Data Pump.
C:\test>expdp jheller@orcl12 directory=export_directory dumpfile=test_user.dmp schemas=test_user
Export: Release 12.1.0.1.0 - Production on Thu Jun 12 22:33:25 2014
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "JHELLER"."SYS_EXPORT_SCHEMA_01": jheller/********#orcl12 directory=export_directory dumpfile=test_user.dmp schemas=test_user
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
. . exported "TEST_USER"."TABLE1" 5.031 KB 1 rows
Master table "JHELLER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for JHELLER.SYS_EXPORT_SCHEMA_01 is:
C:\TEST\TEST_USER.DMP
Job "JHELLER"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Jun 12 22:34:35 2014 elapsed 0 00:00:56
Compress the file.
There is a Data Pump option to compress the data, but it requires the Advanced Compression option. Instead of paying thousands of dollars per core, I recommend downloading one of the thousand free programs that have been compressing data for decades.
C:\test>zip test_user.zip test_user.dmp
adding: test_user.dmp (172 bytes security) (deflated 90%)
C:\test>dir
Volume in drive C is OS
Volume Serial Number is 660C-91D8
Directory of C:\test
06/12/2014 10:37 PM <DIR> .
06/12/2014 10:37 PM <DIR> ..
06/12/2014 10:34 PM 1,435 export.log
06/12/2014 10:34 PM 212,992 TEST_USER.DMP
06/12/2014 10:37 PM 21,862 test_user.zip
3 File(s) 236,289 bytes
2 Dir(s) 689,950,937,088 bytes free
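For comparison, the licensed route would be a single extra parameter on the export (COMPRESSION=ALL requires the Advanced Compression option, so this is shown only for contrast):
C:\test>expdp jheller@orcl12 directory=export_directory dumpfile=test_user.dmp schemas=test_user compression=all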
Drop the user.
SQL> drop user test_user cascade;
User dropped.
SQL> select count(*) from test_user.table1;
select count(*) from test_user.table1
*
ERROR at line 1:
ORA-00942: table or view does not exist
Import the user.
C:\test>impdp jheller@orcl12 directory=export_directory dumpfile=test_user.dmp
Import: Release 12.1.0.1.0 - Production on Thu Jun 12 22:41:52 2014
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "JHELLER"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "JHELLER"."SYS_IMPORT_FULL_01": jheller/********#orcl12 directory=export_directory dumpfile=test_user.dmp
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST_USER"."TABLE1" 5.031 KB 1 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "JHELLER"."SYS_IMPORT_FULL_01" successfully completed at Thu Jun 12 22:42:18 2014 elapsed 0 00:00:19
Check the data.
SQL> select count(*) from test_user.table1;
COUNT(*)
----------
1
Good morning my friends:
I used this piece of code in SQL Server to import/read a text file. I am running a 64-bit machine with Office 2013 Professional (64-bit). All works well because I also have the 64-bit Microsoft Access Database Engine 2010 installed.
select *
from openrowset('MSDASQL'
,'Driver={Microsoft Access Text Driver (*.txt, *.csv)}'
,'select * from C:\MY_FOLDER\Databases\MY_FILES\mytext_file.txt')
The problem is, I have started working on another computer which has 32-bit Office, even though that computer is also a 64-bit machine. Now this code won't work, because I cannot install the 64-bit Microsoft Access Database Engine 2010 while 32-bit Office is installed; I would first need to uninstall the 32-bit Office and reinstall it as 64-bit.
I do not have the Office installation media to remove and reinstall Office.
Are there any other options?
I tried creating a format file, but my text file is unpredictable (it comes from a third party) and the format always breaks; I can't seem to find where the error is. I do not control the quality of this file.
I have also tried most of what is on this page but no luck:
http://bradsruminations.blogspot.in/2011/01/so-you-want-to-read-csv-files-huh.html
This is an update:
I used BULK INSERT, but the double quotes from the file end up in the inserted data, and empty fields come through as double quotes instead of NULLs.
Code:
USE myDatabase;
GO
BULK INSERT my_schema.my_table
FROM 'C:\MY_FOLDER\Databases\MY_FILES\mytext_file.txt'
WITH (
DATAFILETYPE = 'CHAR',
FIELDTERMINATOR = ',',
ROWTERMINATOR = '0x0a',
FIRSTROW=2,
KEEPNULLS
);
GO
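In case it is useful, a minimal sketch of how this could be handled on SQL Server 2017 or later (the version is an assumption): BULK INSERT can parse quoted CSV natively, which strips the double quotes; whether empty fields then arrive as NULLs still depends on how the file writes them.
BULK INSERT my_schema.my_table
FROM 'C:\MY_FOLDER\Databases\MY_FILES\mytext_file.txt'
WITH (
    FORMAT = 'CSV',            -- RFC 4180 parsing: removes the quote characters
    FIELDQUOTE = '"',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '0x0a',
    FIRSTROW = 2,
    KEEPNULLS
);
On older versions, the usual fallback is to bulk-load into a staging table of varchar columns and strip the quotes with REPLACE before moving the rows into the real table.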