Automate dropping a database and creating tables

I made a Spotify app that analyzes user data and manages interactive features by writing the API responses to a PostgreSQL database. The developer terms essentially require me to delete that data when the user is not actively using my app.
Is there a way to automate this on the server (I'm using AWS Lightsail/Ubuntu) so it runs daily? Would I need to add a datetime column to all of my tables and follow one of these: https://www.the-art-of-web.com/sql/trigger-delete-old/? Or is there a better way?
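One pattern that fits this setup, sketched below under assumptions (the table and database names listening_history and spotify_app are illustrative, not from your schema): add a timestamp column to each table you write API responses into, then run a daily DELETE from a cron job on the Lightsail box via psql (or schedule it inside Postgres with the pg_cron extension, if you have it installed). A trigger like the one in the linked article also works, but a scheduled purge is usually easier to reason about for a hard "delete after N hours" rule.

    -- Illustrative names only; adjust to your schema.
    ALTER TABLE listening_history
        ADD COLUMN IF NOT EXISTS created_at timestamptz NOT NULL DEFAULT now();

    -- Purge anything older than 24 hours:
    DELETE FROM listening_history
    WHERE created_at < now() - interval '24 hours';

    -- Example crontab entry on the Ubuntu box (runs daily at 03:00):
    -- 0 3 * * * psql -d spotify_app -c "DELETE FROM listening_history WHERE created_at < now() - interval '24 hours';"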

Related

Snowflake → Zapier Integration

I'm using Zapier with Redshift to fetch data from custom queries and trigger a wide range of actions when new rows are detected in a table or custom query, including sending emails through Gmail or Mailchimp, exporting data to Google Sheets, and more. Zapier's UI lets our non-technical product stakeholders take over these workflows and customize them as needed. Zapier has several integrations built for Postgres, and since Redshift supports the Postgres protocol, these custom workflows can be built in Zapier easily.
I'm switching our data warehouse from Redshift to Snowflake, and the final obstacle is moving these Zapier integrations. Snowflake doesn't support the Postgres protocol, so it cannot be used as a drop-in replacement for these workflows. No other data source has all the information we need for these workflows, so connecting to a datasource upstream of Snowflake is not an option. I would appreciate guidance on alternatives I could pursue, including the following:
Moving these workflows into application code
Using a foreign data wrapper in Postgres for Snowflake, to keep the existing workflows running against a dummy Postgres instance (see the sketch after this list)
Using custom-code blocks in Zapier instead of the Postgres integration
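As a rough illustration of the foreign data wrapper option: the extension name snowflake_fdw and the connection options below are placeholders (any Snowflake FDW you can install would follow the same steps), while the CREATE SERVER / USER MAPPING / FOREIGN TABLE statements themselves are standard Postgres SQL/MED. The dummy Postgres instance would expose a Snowflake table for Zapier to poll roughly like this:

    -- Hypothetical Snowflake FDW; extension name and options are placeholders.
    CREATE EXTENSION snowflake_fdw;

    CREATE SERVER snowflake_srv
        FOREIGN DATA WRAPPER snowflake_fdw
        OPTIONS (account 'xy12345', warehouse 'ANALYTICS_WH');

    CREATE USER MAPPING FOR zapier_user
        SERVER snowflake_srv
        OPTIONS (user 'ZAPIER_RO', password 'changeme');

    -- The table Zapier's Postgres integration polls for new rows:
    CREATE FOREIGN TABLE new_signups (
        id         bigint,
        email      text,
        created_at timestamptz
    )
    SERVER snowflake_srv
    OPTIONS (schema 'PUBLIC', table 'NEW_SIGNUPS');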
I'm not sure if Snowflake has an API that will let you do what you want, but you can create a private Zapier integration that has all the same features and permissions as a public integration and can be customized for your team.
There's info about that process here: https://platform.zapier.com/
You might find it easier to use a vendor solution like Census to forward rows as events to Zapier. Their free plan is fairly generous for getting started. More info here: https://www.getcensus.com/integrations/zapier

How can I create a session in Azure SQL using a database versioning concept, where two or more users can make changes and submit them?

I would like to provide my users with a session or workspace in Azure SQL DB where they can take a snapshot of the database, cook up their changes, analyze the result, and then submit the final changes so that all the other users can see them.
Do you think Temporary Tables in SQL Server are the answer?
Is there middleware on the market that can be used on top of SQL Server to create sessions, manage sessions, and post the session data back to the master DB version, with proper reconciliation between the version created by the user and the current master version of the DB?
I have seen middleware, ArcSDE from Esri, which used to do this for complex geodatabases, but I am struggling to find similar middleware for a plain Azure SQL RDBMS.
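Azure SQL has no built-in versioned-workspace feature comparable to ArcSDE versioning, so this usually ends up being modelled in your own schema. One generic pattern, sketched here with illustrative table and column names, is a per-session staging table plus a MERGE back into the master table on submit (a real reconciliation step would also need conflict detection, e.g. rowversion checks):

    -- Illustrative pattern only; dbo.Parcel / dbo.Parcel_Staging and their columns are assumptions.
    CREATE TABLE dbo.Parcel_Staging (
        SessionId UNIQUEIDENTIFIER NOT NULL,   -- one value per user workspace
        ParcelId  INT              NOT NULL,
        Owner     NVARCHAR(200)    NULL,
        Area      DECIMAL(18, 2)   NULL,
        PRIMARY KEY (SessionId, ParcelId)
    );

    -- On "submit", fold one session's edits back into the master table:
    DECLARE @SessionId UNIQUEIDENTIFIER = 'A0000000-0000-0000-0000-000000000001'; -- supplied by the app
    MERGE dbo.Parcel AS target
    USING (SELECT ParcelId, Owner, Area
           FROM dbo.Parcel_Staging
           WHERE SessionId = @SessionId) AS src
       ON target.ParcelId = src.ParcelId
    WHEN MATCHED THEN
        UPDATE SET target.Owner = src.Owner, target.Area = src.Area
    WHEN NOT MATCHED THEN
        INSERT (ParcelId, Owner, Area) VALUES (src.ParcelId, src.Owner, src.Area);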

How do I replicate data from the Common Data Service to SQL Server on Azure?

I have data in Microsoft's Common Data Service (from Microsoft Dynamics for Talent). I can't use the Data Management Framework as the data in question is in entities that are not available through the DMF.
How do I replicate the data in the CDS back to a SQL database?
What I've tried so far is to create a Logic App (and a Flow; neither worked) that grabs data using the CDS connector and pushes it into a SQL database, but there are several problems with this:
It's a maintenance burden
It's extremely tedious and error-prone to add new tables, etc. I have written a somewhat horrendous stored proc that tries to create a table from the JSON-ified data the Flow sends it, but this is very fragile (a less dynamic variant of this approach is sketched below).
It doesn't work at all, since the size of the data exceeds some kind of limitation in the SQL connector and I get spurious errors.
Rather than pushing through these issues, I'd like to ask whether there's a better way to achieve this. With the Data Management Framework in Dynamics it was simply a matter of scheduling these sync jobs, which worked pretty well. Is there something similar with CDS?
I've also tried looking at the Data Integration projects in Powerapps, but these only seem to allow me to get data into Powerapps/CDS, not back out...
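If you do stay on the Logic App / Flow route, one way to make it less fragile than generating tables from the JSON dynamically is to insert into a fixed schema with OPENJSON (SQL Server 2016+ / Azure SQL). A sketch, with the entity, column and JSON property names as assumptions:

    -- Illustrative only; dbo.Worker and the JSON property names are assumptions.
    CREATE OR ALTER PROCEDURE dbo.InsertWorkers
        @payload NVARCHAR(MAX)   -- JSON array passed in by the Logic App / Flow
    AS
    BEGIN
        INSERT INTO dbo.Worker (WorkerId, FullName, HireDate)
        SELECT WorkerId, FullName, HireDate
        FROM OPENJSON(@payload)
             WITH (
                 WorkerId UNIQUEIDENTIFIER '$.workerid',
                 FullName NVARCHAR(200)    '$.fullname',
                 HireDate DATE             '$.hiredate'
             );
    END;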
Common Data Service for Apps provides access to the data through its user interfaces or API; there is no direct access to the underlying database. This architecture has certain limitations when it comes to processing large volumes of data, for example for data warehousing, reporting, or using Azure machine learning and analytics tools. Replicating CDS data using Extract, Transform, Load (ETL) tools is possible but inherently complex to maintain.
Data Export Service is a service made available on Microsoft AppSource that adds the ability to replicate Dynamics 365 for Customer Engagement apps data to an Azure SQL Database store in a customer-owned Azure subscription.
Note: The Data Export Service requires a Dynamics 365 for Customer Engagement apps subscription; it is not available on Common Data Service for Apps plans.

Auditing SQL Server changes with triggers and a different user id from NodeJS

I'm building a 3-tier application with AngularJS, NodeJS+Express and SQL Server. I'm currently sending each individual query from the backend to the database to be executed. Though that's working well, I now have a legal requirement to audit many changes in the database.
For that, I thought about using triggers. However, the problem is that the user identified in the web application is different from the generic user logged into the database, so I don't know how to deal with this.
I thought about converting every query in the backend into a call to a stored procedure, passing the user id each time as an extra field from the frontend to the backend. However, the user id I get there is not the database user. I can't use a temporary table, as I have read in other posts, because this database is about to be accessed at the same time by thousands of users, so it's not safe in my case. I also saw other solutions, but they applied to Postgres (I need a generic solution, because this application has different versions working with SQL Server, Oracle and MySQL). Finally, I also can't write code or configuration on the server for each user, because potential clients will try the trial version without human intervention.
I need a solution that doesn't hurt performance, as that's important for the success of this application.
Thank you very much.
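On SQL Server (2016+ / Azure SQL), one common way to hand the application user to a trigger without temp tables is SESSION_CONTEXT: the backend sets a key on the connection right before running its statements, and the trigger reads it back. A sketch, with the table, trigger and key names as assumptions (Oracle and MySQL have analogous per-connection mechanisms, such as Oracle's client identifier or MySQL session variables, so the pattern generalizes even though the syntax doesn't):

    -- Assumes SQL Server 2016+ / Azure SQL; dbo.Orders and dbo.OrdersAudit are illustrative.
    -- 1) From the Node.js backend, set the app user on the connection before the audited query:
    EXEC sp_set_session_context @key = N'app_user_id', @value = N'42';

    -- 2) The trigger reads it back, so the audit row records the application user,
    --    not the shared database login:
    CREATE TABLE dbo.OrdersAudit (
        AuditId   INT IDENTITY PRIMARY KEY,
        OrderId   INT           NOT NULL,
        ChangedBy NVARCHAR(128) NULL,
        ChangedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO
    CREATE TRIGGER dbo.trg_Orders_Audit
    ON dbo.Orders
    AFTER UPDATE
    AS
    BEGIN
        INSERT INTO dbo.OrdersAudit (OrderId, ChangedBy)
        SELECT i.OrderId,
               CAST(SESSION_CONTEXT(N'app_user_id') AS NVARCHAR(128))
        FROM inserted AS i;
    END;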

Firebase for a Web Application

I am currently working on a new web application. I really love the idea behind Firebase (AngularFire) for the realtime data sync, but I can't figure out how to organize all the data, give each customer (enterprise) its own data, and ensure no data is shared between enterprises.
In a regular MySQL server, I can create a database per enterprise (the best implementation for speed and security) or simply add an Enterprise table and a Customer table with an enterprise_id. Which is the best approach in a Firebase DB?
