How to use an offline/cache database and synchronise it with an online server

I want to use a local, offline database and, when the network is available, send the data to the server. What are the possible solutions?
The server is Linux running PostgreSQL 9.5 or higher. The local, offline database is LibreOffice 6.2.7 or OpenOffice on Windows 7 and Windows 10 64-bit.
I am thinking about SQL code to do the task, but I have never found out how to deal with two databases in one SQL statement.
Something like: INSERT ... TO ... database.table ...
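For reference, PostgreSQL has no INSERT ... TO ... form that spans two databases; the usual tool is a foreign data wrapper. A minimal sketch, assuming the offline data is staged in a local PostgreSQL instance rather than kept in LibreOffice Base itself (server, user, column and table names below are all invented):

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER remote_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'server.example.com', dbname 'maindb', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER remote_srv
  OPTIONS (user 'sync_user', password 'secret');

-- expose the remote table locally; the cross-database INSERT then
-- becomes an ordinary statement run whenever the network is available:
CREATE FOREIGN TABLE remote_measurements (
  id integer,
  taken_at timestamp,
  value numeric
) SERVER remote_srv
  OPTIONS (schema_name 'public', table_name 'measurements');

INSERT INTO remote_measurements
SELECT id, taken_at, value FROM local_measurements;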

Related

How to extract Oracle 3.x data from really old server

tl;dr - How can I extract database information from an Oracle 3.x database server when today's tools don't support it?
I have a customer who neglected upgrades across the board (Windows Server 2003, Oracle 3.x, etc. Yes, I know this is wrong across the board, but it's my job to clean it up), and I need to extract the database information from the databases on this server for importing into something more modern.
I've checked the Oracle and non-Oracle database GUI applications, and the oldest version they go back to is 7. I've looked on Google and here, and I have yet to find any sort of solution for data extraction. Do I need to ask Oracle for a version of their GUI application that will work with version 3? Is there a code solution? I'm stumped.
That did the trick, @littlefoot. I also looked at the following:
SQuirrel
http://squirrel-sql.sourceforge.net/index.php?page=faq
SQLTools
http://www.sqltools.net/

Connect from multiple applications to one Firebird database via embedded DLL

I am relatively new to database programming. I use Firebird 2.5 with IBPP. I have at least two applications using the same Firebird database. I want to connect with the embedded variant (fbembed.dll, icudt30.dll, icuuc30.dll), since the applications will be hosted on customer PCs. I wrote a simple test application that reads data from the database and started it three times at the same time. Everything worked.
But now I am not sure whether this always works, and whether it works stably without the danger of corrupting data, because when I have a connection to the database open in the viewer IBExpert, my test application cannot connect to the database. Additionally, the documentation (Firebird Embedded) says:
You can have multiple embedded servers running at the same time, and
you can have multiple apps connecting to the same embedded server.
Having a regular server already running isn't a problem either.
However, an embedded server locks a database file for its own
exclusive use after successful connection. This means that you cannot
access the same database from multiple embedded server processes
simultaneously (or from any other servers, once an embedded server has
locked the file).
Is the documentation right? My sample application seems to show the opposite. I had a Firebird SuperServer installed on my PC a while ago, but I uninstalled it before testing this.
The document you refer to is based on Firebird 2.0 or 2.1. The 'server' architecture of Firebird Embedded on Windows was changed in Firebird 2.5. Before Firebird 2.5, Firebird Embedded on Windows behaved as SuperServer, meaning it required exclusive access to the database file.
Starting with Firebird 2.5, Firebird Embedded on Windows behaves like the SuperClassic server model: it uses shared access to the database files, so the same database can be accessed by multiple Firebird Embedded applications and by Firebird servers in the Classic or SuperClassic model (but not SuperServer), as long as they run on the same machine. The downside of this change is that embedded applications need to be able to create, read and write the shared database lockfiles (in C:\ProgramData\Firebird).
You don't need to worry about corruption: if the embedded engine can't access the shared lockfile, the connection will fail. The reason you can't connect with IBExpert is probably that it attempts to connect through a Firebird server using the SuperServer model (which requires exclusive access).
See also the Firebird 2.5 release notes: Changes in the Firebird Engine:
The embedded server in the Windows library, fbembed.dll, now uses Superclassic, not Superserver as previously, thus unifying its model with that of local connection to Superclassic on POSIX. The database file-lock that previously restricted connections to a single application space is replaced by a global lock table that allows simultaneous access to the same database from different embedded server modules. This facilitates concurrent debugging of applications and use of native utility tools like gbak, gstat and so on.
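For illustration: on Windows, whether an application ends up using the in-process embedded engine or a running server is largely a matter of the connection string it hands to the client library (paths below are invented):
C:\data\app.fdb (a plain file path: opened in-process by fbembed.dll)
localhost:C:\data\app.fdb (a host prefix: routed to the Firebird server on this machine)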

Redshift with SSIS/SSDT

Has anyone been successful using Amazon Redshift as a source or destination ODBC component in SQL Server Data Tools 2012?
I've installed the PostgreSQL drivers provided by Amazon and have successfully tested a connection in the Windows ODBC driver administrator but keep running into arcane error messages when I choose my saved DSN and try to pull a table listing.
Redshift is based on quite an old version of Postgres (8.0). Postgres has changed quite a bit since then and the Postgres tools have changed with it. When downloading any tools to use with Redshift you will probably need to use previous versions from several years ago.
The table listing problem is particularly annoying, but I have yet to find a version of psql that can properly list Redshift tables. As an alternative, you can use the INFORMATION_SCHEMA tables to find this kind of information, and in my opinion this is what SSIS/SSDT should be doing by default.
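For example, a listing along these lines (plain information_schema SQL, so it should work against Redshift's Postgres 8.0 base) returns the user tables:

SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name;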
I would not expect SSIS to be able to load data into Redshift reliably, i.e. create a Redshift destination. This is because Redshift does not really support INSERT INTO as a way to load data. If you use INSERT INTO you will only be able to load ~10 rows per second. Redshift can only load data quickly from S3 or DynamoDB using the COPY command.
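A minimal sketch of the fast path (bucket, table name and credentials are placeholders):

-- stage the rows as CSV files in S3 first, then:
COPY my_table
FROM 's3://my-bucket/loads/my_table_'
CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
CSV;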
It's a similar story for all the other ETL tools I've tried, notably the open-source tools Pentaho PDI (aka Kettle) and Talend Open Studio. This is particularly annoying in Talend's case, as they have Redshift components, but they actually try to use INSERT INTO for loading. Even Amazon's own ETL tool, Data Pipeline, does not yet support Redshift as a 'node'.
I have been successful. Try installing both the 32-bit and 64-bit versions of the PostgreSQL ODBC drivers.
Also, in your Project Properties under 'Configuration Properties' > 'Debugging', set 'Run64BitRuntime' to False.
You can also try specifying the connection string in Connection Manager. For example:
Driver={PostgreSQL ANSI};
server=redshiftdb.d113klxjd4ac.us-west-2.redshift.amazonaws.com;uid=;database=;port=5432

How to make a test database from AS400

For SQL Server, we are able to send the database over to offshore staff pretty easily, for the most part.
Is this possible with the AS/400, or can they only VPN in to work?
Every database engine has a slightly different version of SQL. DB2 for i at V5R4 differs from DB2 LUW 9.7, and both differ from SQL Server and MySQL at any version. So the quick answer is no: you can't simply make a copy of a DB2 for i database and run it on MySQL or SQL Server. You'd normally do exactly as you are doing with SQL Server: have one machine here and another machine there, and unload/reload the data as needed.
Having said that, the differences between SQL dialects are not usually crippling. Use the IBM Navigator for i and extract all of the DDL for the IBM database, then try to execute the DDL on the SQL Server machine. You'll have some syntax problems, but you should be able to work them out with someone who is knowledgeable in both dialects. Keep track of the changes to the DDL because you'll need them in order to extract the data from the IBM side.
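To give a feel for the kind of syntax problems to expect, a made-up example of one and the same table in both dialects:

-- DB2 for i
CREATE TABLE ORDERS (
  ID INTEGER GENERATED ALWAYS AS IDENTITY,
  PLACED TIMESTAMP NOT NULL
);

-- SQL Server
CREATE TABLE ORDERS (
  ID INT IDENTITY(1,1),
  PLACED DATETIME2 NOT NULL
);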
Once you have the empty database created on the new machine, it's time to extract the data. Write some CL programs that use CPYTOIMPF to generate CSV files, flat files, or whatever it is that SQL Server wants in order to import properly. Then FTP that data to the new machine and write some scripts to do the import.
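A hedged sketch of such a CL command (library, file and path names are invented):

CPYTOIMPF FROMFILE(MYLIB/ORDERS)
          TOSTMF('/home/export/orders.csv')
          STMFCODPAG(*PCASCII)
          RCDDLM(*CRLF)
          DTAFMT(*DLM)
          FLDDLM(',')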
As you can tell, this is not going to be a simple process, and it will take some time to develop and debug. I'd go with having the offshore staff use a VPN to the local IBM machine.
The easiest way I can think of would be to create a Save File (SAVF), then FTP that save file to the other IBM i and restore it (http://pic.dhe.ibm.com/infocenter/iseries/v6r1m0/index.jsp?topic=/cl/rstobj.htm).
In the PC world this is similar to zipping up a directory, FTPing it to another machine and then unzipping it.
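In CL terms, the round trip looks roughly like this (library and save file names are placeholders):

CRTSAVF FILE(QGPL/MYSAVF)
SAVOBJ OBJ(*ALL) LIB(MYLIB) DEV(*SAVF) SAVF(QGPL/MYSAVF)
/* FTP the save file in binary mode to the other IBM i, then: */
RSTOBJ OBJ(*ALL) SAVLIB(MYLIB) DEV(*SAVF) SAVF(QGPL/MYSAVF)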
If this isn't what you mean, can you elaborate on what you're wanting?
The offshore site probably has their own SQL Server, probably running the same version as you.
But unless they also have an IBM Power System running the same release of IBM i, then they will most likely need to access your system.

How to transfer data from a SQL Server database to an Oracle database

The current application I'm working on, call it X, is an archiving application for the data kept by another application, say Y. Both are very old applications, developed about 8 years ago. So far, from my reading of the documentation, I have learnt that the transfer process works like this: a snapshot of the SQL Server database tables is exported to flat files using the bcp utility, the flat files are FTP'd to the correct Unix box, and there, through .ctl control files, various INSERT statements are generated for the Oracle database. That's how the data is transferred. I wanted to know if there is a better and faster way to accomplish this. There should be a way to transfer data directly; I feel the whole process of writing to files, transferring them, and inserting must be really slow and painstaking. Any insights?
Create a DB link from your Oracle database to the SQL Server database, and you can transfer the data via SELECTs/INSERTs.
Schedule the process using DBMS_SCHEDULER if it needs to run on a periodic basis.
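A rough sketch of the Oracle side (link name, TNS alias, table names and credentials are all illustrative):

CREATE DATABASE LINK mssql_link
  CONNECT TO "sql_user" IDENTIFIED BY "sql_password"
  USING 'mssql_alias';

INSERT INTO archive_table
  SELECT * FROM "dbo"."source_table"@mssql_link;

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'ARCHIVE_TRANSFER',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN INSERT INTO archive_table SELECT * FROM "dbo"."source_table"@mssql_link; COMMIT; END;',
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE);
END;
/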
You can read data from a lot of different database vendors using Heterogeneous Services. To use this, you create a service on the Unix box that uses, in this case, ODBC to connect to the SQL Server database.
You define this service in listener.ora, and you create a TNS alias that points to it. The alias looks pretty normal, except for the extra line (HS = OK). In your database, you create a database link that uses this TNS alias as its connect string.
unixODBC in combination with the FreeTDS driver works fine.
The exact details vary between releases: for 10g look for hs4odbc, for 11g dg4odbc.
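For 11g with dg4odbc, the pieces fit together roughly like this (SID, host and paths are placeholders; an init<sid>.ora under $ORACLE_HOME/hs/admin with HS_FDS_CONNECT_INFO pointing at the unixODBC DSN completes the setup):

# listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = mssql)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0)
      (PROGRAM = dg4odbc)
    )
  )

# tnsnames.ora
mssql_alias =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SID = mssql))
    (HS = OK)
  )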
