Inserting data from SQL Server into a RESTful API via SSIS

I am investigating ways to move data from SQL Server into a system exposed via a RESTful HTTP API.
If I were to use SSIS, would I have to write a custom connector to push the data to the HTTP API after the transform step, or is there a built-in feature that supports pushing to an HTTP API?

If you only want to move a very small amount of data, you could use the Web Service Task
...but note that pushing data out of SQL Server is not what this task is intended for...
The Web Service task executes a Web service method. You can use the Web Service task for the following purposes:
Writing to a variable the values that a Web service method returns. For example, you could obtain the highest temperature of the day from a Web service method, and then use that value to update a variable that is used in an expression that sets a column value.
Writing to a file the values that a Web service method returns. For example, a list of potential customers can be written to a file and the file then used as a data source in a package that cleans the data before it is written to a database.
For more control, you'll want to look at using a Script Component in a data flow; it gives you far more flexibility.
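To make that concrete, here is a minimal sketch of the per-row POST logic you would implement inside the Script Component. The real component would be written in C# or VB.NET (e.g. with System.Net.Http); this is shown in Python only for brevity, and the endpoint URL and payload shape are placeholder assumptions:

```python
import requests  # the real Script Component would use System.Net.Http in C#

# Hypothetical endpoint -- replace with the real API's URL and payload contract.
API_URL = "https://api.example.com/customers"

def push_rows(rows):
    """POST each transformed row to the HTTP API, collecting failures."""
    failed = []
    with requests.Session() as session:
        for row in rows:
            resp = session.post(API_URL, json=row, timeout=10)
            if resp.status_code >= 400:
                # In SSIS you would redirect the row to an error output instead.
                failed.append((row, resp.status_code))
    return failed

# Example usage with rows coming out of the transform step:
errors = push_rows([{"Id": 1, "Name": "Contoso"}, {"Id": 2, "Name": "Fabrikam"}])
```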

Related

How should I push and pull data from a server using a REST API and later generate reports from it

I am new to REST APIs. What I basically understand about a REST API is that you need to call it each time to get the updated data.
In my case, I need to use the data received from the REST API to generate reports in Power BI. In addition, I should be able to "read" the data coming from the server and "write" data to it as well.
I did find the Get Data from Web option in Power BI to connect directly to a REST API, but if I do that I can only "read".
Can you help me with the different options for how I can do both "read/pull" and "write/push" to the server when I have its REST API? I am not sure whether a cloud service has to sit in between.

Ways to Send Snowflake Data to a REST API (POST)

I was wondering if anyone has sent data from Snowflake to an API (POST request).
What is the best way to do that?
I am thinking of using Snowflake to unload (COPY INTO) to Azure Blob Storage and then creating a script to send that data to an API. Or I could just use the Snowflake API directly, all within a script, and avoid blob storage.
Curious about what other people have done.
To send data to an API you will need a script running outside of Snowflake.
Then, with Snowflake external functions, you can trigger that script from within Snowflake, and you can send parameters to it too.
I did something similar here:
https://towardsdatascience.com/forecasts-in-snowflake-facebook-prophet-on-cloud-run-with-sql-71c6f7fdc4e3
The basic steps in that post are:
Have a script that runs Facebook Prophet inside a container on Cloud Run.
Set up a Snowflake external function that calls the GCP proxy, which in turn calls that script with the parameters I need.
In your case I would look for a similar setup, with the script running within Azure.
https://docs.snowflake.com/en/sql-reference/external-functions-creating-azure.html
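To make the Azure variant concrete, here is a minimal sketch of the remote service the external function would call, written as an Azure Functions HTTP handler in Python. The downstream endpoint and row shape are assumptions; the {"data": [[row_number, ...], ...]} envelope is the request/response format Snowflake documents for external functions:

```python
import json

import azure.functions as func
import requests

TARGET_API = "https://api.example.com/ingest"  # hypothetical downstream REST endpoint

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Snowflake external functions send {"data": [[row_number, col1, ...], ...]}
    rows = req.get_json()["data"]
    results = []
    for row in rows:
        row_number, payload = row[0], row[1]
        # Forward each row's payload to the target API with a POST.
        resp = requests.post(TARGET_API, json=payload, timeout=10)
        results.append([row_number, resp.status_code])
    # The response must echo the same envelope, one result row per input row.
    return func.HttpResponse(
        json.dumps({"data": results}), mimetype="application/json"
    )
```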

How to create an online-offline application using ServiceStack

I'm trying to figure out how to create an offline/online approach to use within a huge application.
Right now, each part of the application has its own model and data layer, which directly read/write data from/to SQL Server. My boss is asking me to create a kind of buffer that, in case of connectivity failure, might be used to store data until the connection to SQL Server becomes active again.
What I'm trying to create is something like this: move all data layers into a ServiceStack service. Each "GET" method should query the database and store the result in a cache to be reused when the connection to SQL Server is not available. Each "POST" and "PUT" method must execute its action, or store the request in a cache if the connection fails. This cache must be cleared once the connection to SQL Server is restored.
How can I achieve this? Mine is a WPF application running on Windows 10.
Best regards
Enrico
Maintaining caches on the server is not going to help create an offline application, given that the client wouldn't have access to the server in order to retrieve those caches. What you need instead is to maintain state on the client, so that in the event network access is lost the client is loading from its own local caches.
Architecturally this is easiest achieved with a web app using a Single Page App framework like Vue (+ Vuex) or React (+ Redux or MobX). The ServiceStack TechStacks and Gistlyn apps are good, well-documented examples of this: they store client state in a Vuex store (TechStacks, built with Vue) or a Redux store (Gistlyn, built with React); there is also the old TechStacks, built with AngularJS.
For good examples of this, check out Gistlyn's snapshots feature, where the entire client state can be restored from a single serialized JSON object, or the approach used in the Real Time Network Traveler example, where an initial client state and deltas can be serialized across the network to enable real-time remote control of multiple connected clients.
They weren't developed with offline in mind, but their architecture naturally leads to being offline-capable: each page is first loaded from its local store, then it fires off a request to update its local cache, and thanks to the reactivity of JS SPA frameworks the page is automatically updated with the latest version from the server.
Messaging APIs
HTTP is synchronous and tightly coupled, which isn't ideal for offline communication. What you want instead is to design your write APIs so they're one-way/asynchronous, so that you can implement a message queue on the client which queues up Request DTOs and sends them reliably to the server by resending them (with exponential backoff) until they succeed without error. Then, for cases where the client needs to be notified that its request has been processed, this can be done either via Server Events or via the client long-polling the server to check whether its request has been processed.
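A minimal sketch of that client-side queue, in Python for brevity (a WPF client would implement the same loop in C#); the endpoint URL is a placeholder and the retry bounds are arbitrary:

```python
import time
from collections import deque

import requests

API_URL = "https://example.org/api/requests"  # placeholder write endpoint

def drain_queue(queue: deque, base_delay=1.0, max_delay=60.0):
    """Send queued Request DTOs in order, retrying each with exponential backoff."""
    while queue:
        dto = queue[0]  # peek: only dequeue after a confirmed success
        delay = base_delay
        while True:
            try:
                resp = requests.post(API_URL, json=dto, timeout=5)
                resp.raise_for_status()
                break  # processed by the server; safe to drop from the queue
            except requests.RequestException:
                time.sleep(delay)  # connection down or server error: back off
                delay = min(delay * 2, max_delay)
        queue.popleft()

# Usage: queue up writes while offline, then drain when connectivity returns.
pending = deque([{"op": "CreateOrder", "orderId": 42}])
drain_queue(pending)
```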

Best approach to write a generic Azure Logic App/Azure Function to do FTP operations

I would like to build an FTP service using Azure Logic Apps/Azure Functions. I would like the Logic App to be invoked via an HTTP request (I will expose this app as a REST API later). The FTP server details, like directory, username, and password, will be sent in the request.
Is there a way I can have my Logic App create an FTP connector dynamically based on the incoming request and then do an FTP upload or download?
You cannot create a connection at Logic Apps run time; it needs to happen at authoring time. If it's a pre-defined list of connections, you can create them first, then use a switch-case to branch into the connection that should be used at run time.
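If the connection details genuinely have to arrive with each request, an Azure Function is one way around the connector limitation, since it can open the FTP connection itself at run time. A minimal sketch in Python using the standard library's ftplib; the request shape and route are assumptions, and in practice you would fetch credentials from somewhere safer than the request body:

```python
import azure.functions as func
from ftplib import FTP

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Assumed request shape: {"host": ..., "username": ..., "password": ..., "directory": ...}
    body = req.get_json()
    with FTP(body["host"]) as ftp:
        ftp.login(user=body["username"], passwd=body["password"])
        ftp.cwd(body["directory"])
        # List the directory as a stand-in for the real upload/download logic.
        names = ftp.nlst()
    return func.HttpResponse("\n".join(names), status_code=200)
```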

Where to find the OSB Business service configuration details in the underlying database?

In the OSB layer, when the endpoint URI is changed, I need to alert the core group that the endpoint has changed so they can review it. I tried SLA alert rules but they do not have options for this. My question is: the endpoint URI should be saved somewhere in the underlying database. If so, what are the schema and the table name to query it?
The URI, or in fact any other part of an OSB artifact, is not stored in a relational database but rather kept in memory in its original XML structure. It can only be accessed through the dedicated session management API. The interfaces you will need are part of the com.bea.wli.sb.management.configuration and com.bea.wli.sb.management.query packages. Unfortunately it is not as straightforward as it sounds; in short, to extract the URI information you will need to (see the sketch after this list):
Create a session instance (SessionManagementMBean).
Obtain an ALSBConfigurationMBean instance that operates on that session.
Create a Query object instance (BusinessServiceQuery) and run it on the ALSBConfigurationMBean to get a Ref object to the OSB artifact of your interest.
Invoke getServiceDefinition with your Ref object to get the XML service definition.
Extract the URI from the XML service definition with XPath.
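A rough WLST (Jython) sketch of those steps, modeled on the sample code in the Oracle API reference mentioned below; treat the exact MBean lookups and query methods as assumptions to verify against your OSB version:

```python
# WLST sketch -- run with wlst.sh; names follow Oracle's sample code
from com.bea.wli.sb.management.configuration import SessionManagementMBean, ALSBConfigurationMBean
from com.bea.wli.sb.management.query import BusinessServiceQuery

connect('weblogic', 'password', 't3://osb-admin:7001')
domainRuntime()

# 1. Create a session
sessionName = 'uri_check_session'
sm = findService(SessionManagementMBean.NAME, SessionManagementMBean.TYPE)
sm.createSession(sessionName)

# 2. Obtain an ALSBConfigurationMBean bound to that session
alsb = findService(ALSBConfigurationMBean.NAME + '.' + sessionName,
                   ALSBConfigurationMBean.TYPE)

# 3. Query for the business service of interest (service name is an assumption)
query = BusinessServiceQuery()
query.setLocalName('MyBusinessService')
refs = alsb.getRefs(query)

# 4./5. Fetch the XML service definition and read the endpoint URI from it
for ref in refs:
    sdef = alsb.getServiceDefinition(ref)
    print(sdef)  # extract the URI from this XML definition with XPath

sm.discardSession(sessionName)
disconnect()
```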
The downside of this approach is that you are basically polling the configuration each time you want to check whether anything has changed.
More information, including Java/WLST examples, can be found in the Oracle Fusion Middleware Java API Reference for Oracle Service Bus.
There is also a good blog post describing OSB customization with WLST: ALSB/OSB customization using WLST.
The information about services and all their properties can be obtained via the Java API. The API documentation contains sample code, so you can get it up and running quite quickly; see the Querying resources paragraph when following the given link.
We use the API to read the service configuration (both proxy and business) and for simple management.
As long as you only read the properties, you do not need to handle management sessions. Once you change values, you need to start a session and activate it when you are done, a very similar approach to the one used by the Service Bus console.
