I have an Oracle database with some tables. I need to migrate the table entries from Oracle to a Postgres database. My goal is to write a PL/SQL procedure that takes an ID as an input parameter and, for that ID, generates a SQL script that adds the required columns (from the Oracle DB) to the Postgres DB. How can I export the SQL code from Oracle to Postgres?
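For illustration, a minimal PL/SQL sketch of such a generator could read the table definition from Oracle's USER_TAB_COLUMNS dictionary view and print Postgres-compatible DDL with DBMS_OUTPUT. The procedure name, the table-name parameter (standing in for the question's ID, whose exact meaning isn't specified), and the type mapping are all placeholders, not a complete solution:

CREATE OR REPLACE PROCEDURE gen_pg_ddl (p_table_name IN VARCHAR2) AS
  l_type VARCHAR2(200);
BEGIN
  DBMS_OUTPUT.PUT_LINE('CREATE TABLE ' || LOWER(p_table_name) || ' (');
  FOR rec IN (SELECT column_name, data_type, data_length,
                     ROW_NUMBER() OVER (ORDER BY column_id) AS rn,
                     COUNT(*) OVER () AS total
                FROM user_tab_columns
               WHERE table_name = UPPER(p_table_name)
               ORDER BY column_id)
  LOOP
    -- crude Oracle-to-Postgres type mapping; extend as needed
    l_type := CASE rec.data_type
                WHEN 'VARCHAR2' THEN 'varchar(' || rec.data_length || ')'
                WHEN 'NUMBER'   THEN 'numeric'
                WHEN 'DATE'     THEN 'timestamp'
                WHEN 'CLOB'     THEN 'text'
                ELSE LOWER(rec.data_type)
              END;
    DBMS_OUTPUT.PUT_LINE('  ' || LOWER(rec.column_name) || ' ' || l_type ||
                         CASE WHEN rec.rn < rec.total THEN ',' END);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(');');
END;
/

Spooling that output (or writing it with UTL_FILE) gives a .sql script that can then be run against Postgres with psql.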
Related
I have a result set I need to pull from a linked DB2 server table into SQL Server. The table is huge, and I don't want or need to pull the whole thing; I only need the records for a handful of users. The problem is that the User IDs are stored in a SQL Server table, not on the DB2 side. While I have SELECT privileges on the DB2 server, I cannot create a table there, so as far as I'm aware I cannot upload the table of User IDs to the DB2 server. Is there a way to limit the result set pulled from the DB2 server to the User IDs stored in the SQL Server table?
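One common workaround is to build the filter on the SQL Server side and push it into the pass-through query. Because OPENQUERY only accepts a literal string, the statement has to be assembled dynamically. A hypothetical T-SQL sketch, where DB2LINK, dbo.UserIds, LIB.BIGTABLE, and USER_ID are placeholder names and STRING_AGG assumes SQL Server 2017 or later:

DECLARE @ids NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build a comma-separated IN list from the local SQL Server table
SELECT @ids = STRING_AGG(CAST(UserId AS NVARCHAR(20)), ',')
FROM dbo.UserIds;

-- Embed the list in the pass-through query so the filtering happens on DB2
SET @sql = N'SELECT * FROM OPENQUERY(DB2LINK,
    ''SELECT * FROM LIB.BIGTABLE WHERE USER_ID IN (' + @ids + N')'')';

EXEC sp_executesql @sql;

Since only a handful of users are involved, the IN list stays well below any practical query-length limit.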
I am unable to migrate data from SQL Server to an Aurora MySQL DB using the AWS DMS service. Currently, one particular table has a different number of columns in the MySQL DB, which is why I cannot transfer its data from SQL Server to Aurora MySQL.
Please check the image below for reference.
As the picture suggests, I want to transfer data from the booking table in SQL Server to the booking table in Aurora MySQL, which has fewer columns.
Can anyone suggest a way to do it?
There may be a way to cope with this directly from AWS without having to change your schema. But one option here would be to simply create a table in MySQL which matches the column/type count of its counterpart in SQL Server, i.e.
CREATE TABLE booking (bookingID int, bookingVersion int, ...) -- 7 columns
Then, migrate the data from SQL Server to MySQL using the AWS tool. Finally, just drop the bookingVersion column from your MySQL table:
ALTER TABLE booking DROP COLUMN bookingVersion;
This should work, because the extra DDL steps I suggested can be done entirely within MySQL and don't involve your SQL Server database at all.
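On the "directly from AWS" point: DMS table mappings also support transformation rules, and if I recall the format correctly, a rule-action of remove-column can drop the extra column during the migration itself, avoiding the post-migration ALTER. A sketch, reusing the names from the example above (the schema name dbo is an assumption):

{
  "rules": [
    {
      "rule-type": "transformation",
      "rule-id": "1",
      "rule-name": "drop-bookingVersion",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "dbo",
        "table-name": "booking",
        "column-name": "bookingVersion"
      },
      "rule-action": "remove-column"
    }
  ]
}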
I have tables in SQL Server that need to be cloned as Hive tables. Since the CREATE TABLE syntax differs between the two,
I cannot simply reuse the CREATE TABLE statement from SQL Server (or from any other RDBMS) in Hive.
I am trying to create a configurable script where one can provide the table schema as input and it would produce Hive CREATE TABLE statements, or an .hql file with the CREATE TABLE schema.
Has anyone tried something similar?
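As one possible starting point, here is a hypothetical T-SQL sketch that reads a table's definition from INFORMATION_SCHEMA.COLUMNS and prints a Hive CREATE TABLE statement. The type mapping is deliberately minimal, Hive clauses like ROW FORMAT or STORED AS would still need to be appended, and STRING_AGG assumes SQL Server 2017 or later:

DECLARE @table SYSNAME = N'booking';          -- placeholder table name
DECLARE @sep NVARCHAR(10) = N',' + NCHAR(10); -- separator for STRING_AGG
DECLARE @ddl NVARCHAR(MAX);

SELECT @ddl = N'CREATE TABLE ' + @table + N' (' + NCHAR(10)
     + STRING_AGG(CONVERT(NVARCHAR(MAX),
           N'  ' + COLUMN_NAME + N' ' +
           CASE DATA_TYPE
             WHEN 'int'      THEN N'INT'
             WHEN 'bigint'   THEN N'BIGINT'
             WHEN 'varchar'  THEN N'STRING'
             WHEN 'nvarchar' THEN N'STRING'
             WHEN 'datetime' THEN N'TIMESTAMP'
             ELSE N'STRING'  -- crude fallback; extend per type
           END), @sep) WITHIN GROUP (ORDER BY ORDINAL_POSITION)
     + NCHAR(10) + N');'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @table;

PRINT @ddl;   -- or write it out to a .hql file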
In Oracle we have a dynamic view v$parameter that lists the database parameters and their values.
Is there an equivalent view, system table, or procedure that provides similar data for SQL Server databases?
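The closest equivalents I know of: sys.configurations exposes the instance-level settings that sp_configure manages, and sys.database_scoped_configurations (SQL Server 2016+) covers per-database settings. For example:

-- Instance-level parameters, roughly comparable to v$parameter
SELECT name, value, value_in_use, description
FROM sys.configurations
ORDER BY name;

-- Per-database settings (SQL Server 2016 and later)
SELECT name, value
FROM sys.database_scoped_configurations;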
I have a database in SQL Server that contains data collected during one day, and a PostgreSQL database with OSM data. I need to process the collected data in order to create reports for my users.
My plan is to somehow call a PostgreSQL procedure from SQL Server, pass the collected data to PostgreSQL, do something with that data there, and return another result set to SQL Server for building the reports.
What is the 'most efficient' way to achieve this? Or, the better question at the moment: what is a way to achieve this functionality at all?
My idea is to connect SQL Server and PostgreSQL with the PostgreSQL ODBC driver, copy the data from SQL Server to a PostgreSQL table, run the stored procedure on PostgreSQL, and return the data to a SQL Server result table. But this is not a scheduled task. The data to be transferred to PostgreSQL contains latitude, longitude, and bearing for about 2-3 million rows, and the function that analyses them processes records one at a time, not all at once.
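To make the linked-server idea concrete, here is a hypothetical T-SQL sketch, assuming a linked server named PGLINK has already been defined over the PostgreSQL ODBC driver and that staging_points and analyze_points() exist on the Postgres side (all names here are placeholders):

-- Push the collected rows into a PostgreSQL staging table
INSERT INTO OPENQUERY(PGLINK,
    'SELECT latitude, longitude, bearing FROM staging_points')
SELECT latitude, longitude, bearing
FROM dbo.CollectedData;

-- Run the analysis on the PostgreSQL side and pull the results back
SELECT *
INTO dbo.ReportResults
FROM OPENQUERY(PGLINK, 'SELECT * FROM analyze_points()');

Whether pushing 2-3 million rows through ODBC this way performs acceptably is worth testing before committing to it.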
My approach is to use C# and Npgsql to connect to Postgres.
I created an app that does the following:
check SQL Server every minute.
check Postgres for the last inserted ID.
create a dataset from SQL Server with the new IDs.
insert the new records into Postgres.
run a Postgres stored procedure to generate the new data.
create a separate web service to consume the report generated on Postgres.
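For steps 2 and 3, the two polling queries might look like this (a hypothetical sketch; table and column names are placeholders):

-- On Postgres, via Npgsql: find the last inserted ID (the high-water mark)
SELECT COALESCE(MAX(id), 0) AS last_id FROM points;

-- On SQL Server, via SqlClient: fetch only the rows newer than that mark
SELECT id, latitude, longitude, bearing
FROM dbo.CollectedData
WHERE id > @lastId
ORDER BY id;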