I have a DMP file that was exported from Oracle 10 with the legacy exp command (not expdp).
However, I do not know what is inside.
Before the import, I want to extract information from the file.
I know that for impdp there are options to extract information (like the contained users or schemas) from the export file. Is there a similar way to use imp to get this information?
Use IMP with the SHOW parameter, such as
imp un/pw@db file=aldorado.dmp show=y
From the Oracle documentation:
You can also display the contents of an export file without actually
performing an import. To do this, use the Import SHOW parameter.
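For example, to capture the full listing in a file you can inspect afterwards (the log file name is illustrative):

imp un/pw@db file=aldorado.dmp show=y full=y log=contents.log

The listing produced by show=y contains the DDL stored in the dump, so the owning users/schemas can be read off the CREATE statements.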
I created a dump file dumpfile.dmp in Oracle 12c for a schema, say A, from the source database. Then I tried to import the dump file into several schemas, say B, C and D, on another database TESTDB with one command, using the remap_schema option. The command looks like this:
impdp system/password@TESTDB directory=mydirectory dumpfile=dumpfile.dmp remap_schema=A:B,C,D remap_tablespace=TBS_A:TBS_B,TBS_A:TBS_C,TBS_A:TBS_D logfile=mylogfile.log
I even put the command in a .par file, but I still get the same error.
It always comes back with the error "UDI-00014: invalid value for parameter, 'remap_schema'".
I would appreciate it if anyone could tell me what I am doing wrong.
You need to take a closer look at the syntax for REMAP_SCHEMA (and REMAP_TABLESPACE).
There is no provision for remapping one exported schema (or tablespace) to multiple destination schemas (or tablespaces).
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sutil/datapump-import-utility.html#GUID-5DA84A72-B71C-4491-9DD8-7075D9A4B04F
If your follow-up question is 'so how do I accomplish this purpose?' the answer is to run a separate import for each destination schema.
tried to import the dump file to several schemas say B, C, D
remap_schema=A:B,C,D
You can't do that; DataPump doesn't support that kind of remap. remap_schema must be a 1:1 relationship, as must remap_tablespace, and the source must be unique (i.e. you can only remap schema A once per import). Per the documentation:
Multiple REMAP_SCHEMA lines can be specified, but the source schema
must be different for each one.
You will have to run separate imports for each target schema.
impdp system/password@TESTDB directory=mydirectory dumpfile=dumpfile.dmp remap_schema=A:B remap_tablespace=TBS_A:TBS_B logfile=mylogfile.log
impdp system/password@TESTDB directory=mydirectory dumpfile=dumpfile.dmp remap_schema=A:C remap_tablespace=TBS_A:TBS_C logfile=mylogfile.log
impdp system/password@TESTDB directory=mydirectory dumpfile=dumpfile.dmp remap_schema=A:D remap_tablespace=TBS_A:TBS_D logfile=mylogfile.log
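Since you mentioned a .par file: the same approach with one parameter file per target schema would look something like this (file names are illustrative). import_B.par might contain:

directory=mydirectory
dumpfile=dumpfile.dmp
remap_schema=A:B
remap_tablespace=TBS_A:TBS_B
logfile=import_B.log

and would be run as:

impdp system/password@TESTDB parfile=import_B.par

Repeat with remap_schema=A:C / TBS_A:TBS_C and remap_schema=A:D / TBS_A:TBS_D for the other two imports.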
I want to export some data from OpenTSDB and then import it into DolphinDB.
In OpenTSDB, the metrics are device_id and ssid, and the tags are battery_level, battery_status, battery_temperature, bssid, cpu_avg_1min, cpu_avg_5min, cpu_avg_15min, mem_free, mem_used and rssi.
In DolphinDB, I create a table as below:
COLS_READINGS = `time`device_id`battery_level`battery_status`battery_temperature`bssid`cpu_avg_1min`cpu_avg_5min`cpu_avg_15min`mem_free`mem_used`rssi`ssid
TYPES_READINGS = `DATETIME`SYMBOL`INT`SYMBOL`DOUBLE`SYMBOL`DOUBLE`DOUBLE`DOUBLE`LONG`LONG`SHORT`SYMBOL
schema_readings = table(COLS_READINGS, TYPES_READINGS)
I find that a CSV text file can be imported into DolphinDB, but I don't know how to export data to a CSV text file from OpenTSDB. Is there an easy way to do this?
Assuming you're using an HBase backend, the easiest way would be to access that directly. The OpenTSDB schema describes in detail how to get the data you need.
The data is stored in one big table, but to save space, all metric names, tag keys and tag values are referenced using UIDs. These UIDs can be looked up in the UID table which stores that mapping in both directions.
You can write a small exporter in a language of your choice. The OpenTSDB code comes with an HBase client library, asynchbase, and has some tools to parse the raw data in its Internal class, which can make this a bit easier.
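If writing your own exporter is more than you need, the tsdb command-line tool that ships with OpenTSDB can also dump raw data points as text. A rough sketch, with placeholder dates and metric name:

tsdb scan --import 2019/01/01-00:00:00 2019/02/01-00:00:00 sum your.metric.name

With --import the output is in OpenTSDB's text import format (metric, timestamp, value, then tag=value pairs, one data point per line), which is straightforward to reshape into the CSV layout your DolphinDB table expects.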
Here is my use case:
I have two different dump files, from two different external sources, both exported with exp (so classic dumps): export1.dmp and export2.dmp. (As far as I can see, they were both exported with exp version V11.02.00, but I don't know anything more about them.)
For each one I run the imp utility (on Oracle 12c) with the indexfile option, in order to generate an SQL file containing the table and index creation commands, like this:
(a) imp mytargetuser/password file=export1.dmp full=y indexfile=create1.sql
(b) imp mytargetuser/password file=export2.dmp full=y indexfile=create2.sql
In create1.sql every create statement gets generated with the table/index name prefixed by mytargetuser (which is what I want), like:
create table mytargetuser.table1 ...;
create index mytargetuser.index1 ...;
However, in create2.sql every create statement gets generated with the table/index name prefixed by someuser (which is probably a user in the original database, from where the dump was made):
create table someuser.table1 ...;
create index someuser.index1 ...;
Any idea why the indexfile output differs between the two dumps? And is there any way I can force imp to always behave as in case (a) above, i.e. to use the user I run imp as for the schema prefix in the generated script (or to not prefix the table/index names with a schema name at all; that would also work for me)? (Once again, I cannot influence in any way how the dump is generated on the other end.)
The behaviour is indeed explained in the manual:
If you do not specify TOUSER, then Import will do the following:
- Import objects into the FROMUSER schema if the export file is a full dump or a multischema, user-mode export dump file
- Create objects in the importer's schema (regardless of the presence of or absence of the FROMUSER schema on import) if the export file is a single-schema, user-mode export dump file created by an unprivileged user
Based on the above one might surmise that export2.dmp "is a full dump or a multischema, user-mode export dump file", while export1.dmp "is a single-schema, user-mode export dump file created by an unprivileged user".
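One thing worth trying for export2.dmp is to name the source schema explicitly with a FROMUSER/TOUSER pair, taking someuser from the prefixes in the generated script (whether indexfile honours the remap this way is something to verify on your version):

imp mytargetuser/password file=export2.dmp fromuser=someuser touser=mytargetuser indexfile=create2.sql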
I am a Sybase BO developer.
I am looking for possible options to copy data from Sybase IQ production and load it into QA/UAT.
I need only a subset of the data (based on dates), not full tables.
What are the possible options?
Check the data extraction facility described at http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00170.1540/doc/html/san1288042643642.html and the examples at http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc00170.1540/doc/html/san1288042645377.html
The idea is that you set temporary options for the output file name, directory and so on, then run a query, and the result set is sent to a file. You must then write a LOAD TABLE command to load this file on the target side; see the sketch below. It seems a bit confusing at first, but it is very fast.
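A minimal sketch of that sequence, assuming the option and clause names from the linked pages; table, column and file names are placeholders, and the default comma-delimited ASCII output is assumed:

-- On production: route the next result set to a file
SET TEMPORARY OPTION Temp_Extract_Name1 = 'orders_subset.csv';
SELECT * FROM orders WHERE order_date >= '2023-01-01';
SET TEMPORARY OPTION Temp_Extract_Name1 = '';  -- switch extraction back off

-- On QA/UAT: load the extracted file into the same table
LOAD TABLE orders ( order_id, order_date, amount )
FROM 'orders_subset.csv'
DELIMITED BY ','
ESCAPES OFF QUOTES OFF;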
I am looking for the right tool to export specific rows (selected by a WHERE condition) of some Oracle database tables. One column holds CLOB data that can be larger than 4000 characters, so exporting the rows as INSERT INTO statements does not work.
Using exp works, but it also exports the DDL, which causes errors when using imp because the tables already exist.
Use the IGNORE=Y parameter when importing the dump file. This tells the import to ignore object creation errors and load the rows into the existing tables.
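Put together, a sketch of the round trip (table name and predicate are placeholders; quoting of the query parameter depends on your shell):

exp un/pw@db tables=mytable query=\"where created_at > sysdate - 7\" file=subset.dmp
imp un/pw@db file=subset.dmp ignore=y full=y

ignore=y suppresses the ORA errors from the CREATE statements, and the exported rows are appended to the existing tables.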