I'm using Navision Dynamics 5.0 and need to export all the financial data into my data warehouse on a regular basis (once daily), and I would prefer not to use CSV files as the export method.
Which other methods are normally used? This must be a common task for any company that uses Navision Dynamics and needs to get the data out automatically.
I'm of course also worried about locking the tables when exporting the data.
I can think of these methods so far:
1) Direct ODBC access to all the underlying tables
2) Creation of a read-only indexed view (materialized view) on top of the Navision tables, which holds a copy of the Navision data and can then be accessed by the data warehouse. (NB: An indexed view is a view that has been materialized, i.e. computed and stored; see the rough T-SQL sketch below the PS.)
3) ?
4) ?
Let me hear your typical ways of doing the export.
PS: I have heard that there is no web service export option in Navision Dynamics 5.0, only in the newest version, NAV 2009, so I cannot use a web service approach.
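To make option 2 concrete, here is a rough T-SQL sketch of the kind of indexed view I have in mind (the company, table and column names below are just placeholders for whatever NAV actually generates in your database):
-- The view must be schema-bound and use two-part names to be indexable
CREATE VIEW dbo.GLEntry_DW
WITH SCHEMABINDING
AS
SELECT [Entry No_], [G_L Account No_], [Posting Date], [Amount]
FROM dbo.[CRONUS$G_L Entry];
GO
-- The unique clustered index is what materializes (persists) the view
CREATE UNIQUE CLUSTERED INDEX IX_GLEntry_DW ON dbo.GLEntry_DW ([Entry No_]);
One caveat I'm aware of: an indexed view imposes extra SET-option requirements on every session that writes to the base table, so this would need to be tested carefully against NAV before going live.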
I found this document describing some of the various export methods:
http://nav.dk/files/Nav_IntegrationGuide1.2.pdf
So to continue my list, here are some more options:
3) It seems one solution could be using Navision's own ODBC driver, the NAV ODBC Driver (NODBC).
4) Another solution could be using Navision's built-in Dataports for exporting data. However, Dataports can only produce CSV files.
You could also use XmlPorts, if an XML file is preferable to CSV. Both DataPorts and XmlPorts allow you to aggregate data: for example, you can export sales headers along with the lines for each header, if that is useful in your scenario.
You can also use filters to export incremental updates to the warehouse daily. If you are concerned about holding locks for a long time, filters also let you export the data in chunks.
I believe most solutions use the NAS (Navision Application Server) to schedule running DataPorts or XmlPorts, so the export is driven by NAV.
As a further alternative to NODBC, you could also explore CFront, a C/.NET API that gives relatively low-level access to the data, including the ability to evaluate FlowFields, etc. NODBC and CFront are really the only options if you want to call into NAV (rather than using the NAS to push data out as CSV/XML).
I haven't compared the relative performance of each method, but suspect that NODBC and CFront would be fastest for large volumes of data.
NODBC, CFront and the NAS all require specific granules in your license - so you might want to check which, if any, you are currently licensed to use.
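As a rough illustration of the NODBC route mentioned above, a daily incremental pull is just a SQL statement issued over the ODBC connection, along these lines (the table and field names, and how special characters are converted, depend on your setup and the driver's identifier option; NODBC also supports only a limited SQL subset):
SELECT "Entry No_", "G_L Account No_", "Posting Date", "Amount"
FROM "G_L Entry"
WHERE "Posting Date" >= {d '2010-01-01'}   -- ODBC date escape; substitute yesterday's date on each run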
The firm I work at has a lot of data sources entering the firm's database via the Informatica ETL tool, with the logic stored in mapplets and other data models (sorry if I'm not using the exact terminology).
The problem is that all the business logic is stored in the 'graphical interface' and nowhere else: every time I want to see which fields feed a target field, I have to trace the inputs through the mapplet, and that takes a very long time.
The question is: is there a tool that can take all the relationships in an Informatica mapplet and somehow export them to an Excel table (so I can see it all without tracing)? That way I could try to write proper documentation.
Thanks in Advance.
It's possible to export mappings or whole workflows to XML. Next, you can use this tool; it will create tables with the source-to-target dependencies for every mapping.
Keep in mind it will only map inputs to outputs; it won't extract the full logic and transformations done along the way, as that would be too complex for a simple visualization.
Informatica supports exporting mapping information to Excel; just search the documentation, which tells you how to do it.
However, for anything other than the simplest of mappings, what ends up in Excel is not that easy to understand. If your Informatica installation supports it, then using the lineage capabilities is a much better bet.
Some Background:
Somewhere around Oracle 10g, about a decade ago (give or take), Oracle added a new method of exporting and importing databases called Oracle Data Pump. Aside from the silly name, the functionality works mostly the same as the original Export and Import utilities.
The documentation for the original utility contains the following warning, which appears to be at least somewhat self-contradictory:
Original export is desupported for general use as of Oracle Database 11g. The only supported use of Original Export in 11g is backward migration of XMLType data to a database version 10g release 2 (10.2) or earlier. Therefore, Oracle recommends that you use the new Data Pump Export and Import utilities, except in the following situations which require Original Export and Import:
You want to import files that were created using the original Export utility (exp).
You want to export files that will be imported using the original Import utility (imp). An example of this would be if you wanted to export data from Oracle Database 10g and then import it into an earlier database release.
As far as I can tell, the only reason Exp and Imp would not operate correctly is if the database uses features introduced in 11g onward. Otherwise, it appears that the old Exp and Imp commands should work just fine, and from the above, they do appear to be officially supported.
One of the key differences between "Data Pump" and "original" export (and this is important for my application) is that Data Pump operates server-side only, meaning that a user needs at least some degree of access to the server to retrieve the file produced by the export. At best this is inconvenient, and at worst it results in a file that cannot be accessed by anyone other than the DBA.
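For context, the server-side plumbing Data Pump relies on looks roughly like this (the directory name, path and user below are just placeholders), and the dump file ends up on the server's filesystem rather than the client's:
-- A DBA must create a directory object pointing at a path on the database server
CREATE DIRECTORY dp_exports AS '/u01/app/oracle/dp_exports';
GRANT READ, WRITE ON DIRECTORY dp_exports TO scott;
-- expdp scott ... DIRECTORY=dp_exports DUMPFILE=scott.dmp then writes scott.dmp to that server-side path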
Issue:
When we upgraded to 12c from 11g, we had an issue using the original export utility. It would run successfully up to the point of exporting the triggers, then produce an error as follows:
EXP-00056: ORACLE error 4063 encountered
ORA-04063: package body XDB.DBMS_XDBUTIL_INT has errors
ORA-06508: PL/SQL: could not find program unit being called:
"XDB.DBMS_XDBUTIL_INT"
The Question:
This issue came up at least a dozen times in different contexts, and we are sort of playing whack-a-mole with it. The most recent attempt at solving it involves recompiling every package on the server, which takes about a half hour.
Why does this export issue keep coming up?
Are Exp and Imp actually, officially, deprecated, such that we are no longer able to use them reliably?
Are there any other straightforward ways to get a client-side export of the database?
Why does this export issue keep coming up?
Since the problem is intermittent, I would guess it's caused by deferred segment creation. Since 11g, tables and partitions can be configured not to allocate any space until they contain some data. (This can save significant space for tables with many empty partitions.) But Exp doesn't understand this and assumes every table must have a segment, which means some tables and related features may appear to "randomly" cause problems depending on whether they've been populated or truncated recently.
You can find those tables with this query:
select * from dba_tables where segment_created = 'NO';
And then force them to have a segment with this statement:
alter table table_name allocate extent;
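If many tables are affected, you can generate all of the ALTER statements from the same dictionary view and run the output (just a quick sketch; note that partitions have their own SEGMENT_CREATED flag in dba_tab_partitions):
select 'alter table "' || owner || '"."' || table_name || '" allocate extent;'
from dba_tables
where segment_created = 'NO';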
Are Exp and Imp actually, officially, deprecated, such that we are no longer able to use them reliably?
This is debatable, but I'd say yes, the original Exp and Imp are truly "deprecated" now. It does feel like Oracle plays a lot of games with deprecating software: for example, deprecating the free Change Data Capture in favor of the super-expensive GoldenGate, or deprecating the non-container architecture when almost nobody wants to use their expensive containers. But it's been a long time, and Exp and Imp don't cut it anymore.
Are there any other straightforward ways to get a client-side export of the database?
Try OCP, Oracle CoPy. You still need to generate the export on the server, but OCP allows you to download files from the server filesystem to the client filesystem without any server filesystem permissions. It's still not as straightforward as it should be, but at least you don't have to give everyone privileges on the server filesystem.
During an upgrade from NAV 2009 to NAV 2016, I'm trying to remove objects relating to an old, discontinued product. The objects are not within the range of our license and consist of Forms, Tables and Reports. When deleting them, I'm faced with the well-known error:
"You do not have permission to delete the '[object name]' Table."
I've tried with my developer's license and the customer's license, with no luck. Since the product no longer exists, there is no point keeping these objects around, and I need them gone for the upgrade process.
What is the best approach or technique for deleting objects that are not in the license?
UPDATE: How this issue was resolved
I got in contact with the product owner and explained my problem. They sent me a neat PowerShell script to run, and it worked like a charm. Reading through the script, I can see that it uses the SQL cmdlets to select and delete the relevant data from the following SQL tables:
Objects, Object Metadata, Object Metadata Snapshot, Object Tracking, Object Translation, Permission.
This was the preferred method of the product owner, who used to develop this product, and it should be applicable to all NAV objects. I have not yet successfully tried any of the answers below (more attempts to come). Hopefully this new information will give someone enough knowledge to provide a good answer.
One way that has been used successfully by several people, but certainly cannot be recommended for a production system, is to simply delete these objects via SQL from the Object table and its supplemental tables. In the case of table objects, you would also need to manually drop the SQL table itself as well as its VSIFT views.
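A rough T-SQL sketch of that direct delete for a single object is below (again: not for production, take a backup first, and note that the exact table and column names vary between NAV versions, so check your own database; the object-type codes come from the Object table):
DECLARE @ObjType int = 3;       -- e.g. 1 = Table, 3 = Report, 5 = Codeunit
DECLARE @ObjID   int = 20000000;

DELETE FROM [Object]          WHERE [Type] = @ObjType AND [ID] = @ObjID;
DELETE FROM [Object Metadata] WHERE [Object Type] = @ObjType AND [Object ID] = @ObjID;
DELETE FROM [Object Tracking] WHERE [Object Type] = @ObjType AND [Object ID] = @ObjID;
DELETE FROM [Permission]      WHERE [Object Type] = @ObjType AND [Object ID] = @ObjID;
-- The product owner's script also touched [Object Metadata Snapshot] and [Object Translation].
-- For table objects you would additionally have to drop the data table itself and its VSIFT views.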
A slightly better (probably) way is to change the number of the object via SQL (e.g. to a number within your license range) and then delete the object via NAV.
The best way is to use the "killer objects" technique, which allows you to delete objects via a FOB import:
http://navisionary.com/2011/11/how-to-delete-bsolete-dynamics-nav-objects/
If you find a partner who can provide you with such killer objects (they need a license that can create objects in the needed range), it solves your problem in a "clean" way.
If not, you may want to consider creating empty objects in the 50000 range in a test DB, changing their numbers to the obsolete range via SQL, exporting them as a FOB, and then importing them into your target DB with the "Delete" option.
Create a new, empty database, export only the needed objects from the old database, and import them into the new database.
In NAV 2016 the application database can be separated from the database containing the data, so (I assume) you could just unmount it from the database with the old objects and mount it against a new application database. Not sure, to be honest.
It is due to the range of the license. For example, if your development license has a table range of 7,000,000 to 7,000,200 and you want to delete a table with ID 20,000,000, you get that error.
The best solution is, when you do the upgrade, not to include these objects at all: export all objects except the ones you want to delete.
Does anyone have an elegant suggestion for how to get the contents of an Excel spreadsheet into SQL Server via a web form? I need to allow our clients to upload modest amounts of structured data, and I need that data to ultimately reside in a SQL table. I really can't expect the clientele to produce anything but an Excel file, but I could require that it be an .xlsx file.
The web app is written in ColdFusion; it doesn't need to handle huge numbers of simultaneous requests, but I don't want to consider some sort of server-side batch-job processing or shunt the user to an ASP.NET page (which is what we are doing now).
Any recommendations (or examples of how others are successfully doing this) would be appreciated. Due to the sensitivity of the data, we really can't do anything to compromise the security of the web or sql servers.
If you are using CF9, then you could easily use the cfspreadsheet tag too. I mention this one specifically because Shawn's link did not (presumably due to its being relatively new on the CF scene). Here's the livedoc link: http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec17cba-7f87.html
For full use, I would create a web form with a standard file-upload field. In the backend handler for the form submission, get a copy of the file with:
<cffile action="upload" destination="uploaded.xls".....>
Then use:
<cfspreadsheet action="read" query="myExcelData" src="uploaded.xls" ...>
At that point, your spreadsheet content will be available as a query object. You can then loop over this query, running an insert query against your SQL Server for each row. That should do it.
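The insert that each loop iteration runs would be something like the following (the staging table and its columns are just placeholders; bind the spreadsheet values with cfqueryparam rather than concatenating them into the SQL):
-- hypothetical staging table for the uploaded rows
CREATE TABLE dbo.ClientUpload (
    CustomerNo varchar(20),
    ItemNo     varchar(20),
    Quantity   int,
    UploadedAt datetime NOT NULL DEFAULT GETDATE()
);

-- executed once per spreadsheet row, with the three values passed as parameters
INSERT INTO dbo.ClientUpload (CustomerNo, ItemNo, Quantity)
VALUES (?, ?, ?);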
Here are the most notable options to help point you in the right direction; choose what you are most comfortable with (Source: Charlie Arehart).
CFXL
JXLS
CFX_Excel
My personal recommendation is to go the CFX_Excel route. Although it is a commercial product, it gives you the most functionality and flexibility of the options listed.
I have a product based on the InterSystems Caché database, and I can see no classes, no schemas, no tables, only globals. Is there any smart way to export data from these globals and get a "human-readable" structure?
First question is... what version of Caché?
Second question is... what tools do you have access to? Terminal, Studio, Management Portal??
If the data is in tables/classes, you should be able to access it via ODBC, at least.
If there aren't any tables/classes, the data is probably in Globals.
If the data is in Globals (persistent sparse array storage), it can look a bit weird if you aren't used to the common patterns.
Even if it is in Globals, it may be possible to define classes with custom mapped storage to make them appear in a table-like way via SQL.
Caché is EXTREMELY flexible, but it can have a steep learning curve. :-(
Globals in InterSystems Caché are a schemaless type of storage, so the best "human-readable" format you can get is the one in the System Management Portal.
Other options are:
* zw command in terminal
* d ^%G command in terminal
Are you able to view the Caché SMP or connect to the database using Caché Studio? I would think you'd find code somewhere in there (at least a bunch of routines, if they're not using classes). Using the SMP to browse the globals is a good way to get familiar with the datasets they contain. From a terminal session, you can use the zw command to take a look at global node contents:
USER> zw ^GlobalName
http://docs.intersystems.com/cache20082/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_czwrite
Can you give a little more information about your situation?
Depending on the structure of your globals, you could create classes for them and edit the storage mapping to point at them. Based on that, you could then go ahead and create reports or (Zen/CSP) web pages to display the contents. However, depending on the complexity of your data, this could take you anywhere between hours and months. :/
My experience is to use the Navicat tool to export the Caché database into a MySQL or Postgres schema, using its import tool via ODBC, in order to understand the DB model.
ODBC works with Caché. You can use the ODBC connection to export the data to another structure, such as a set of free tables or text files.
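For example, once the data is visible to SQL (either because classes already exist or because you have set up mapped storage as described above), any ODBC client can pull it out with an ordinary query; the class below is just the classic Caché SAMPLES example, not something from your product:
SELECT ID, Name, DOB
FROM Sample.Person
WHERE DOB >= '1980-01-01'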
You can use a system utility called ^%GO (Global Output), run as D ^%GO in the terminal. You specify the global(s) and the file you want them exported to. There's also ^%GI for importing globals from such a file.