Amazon RDS and server loadable functions [duplicate] - database

I need to send an HTTP request when a database changes, so I am using the mysqludf extension.
It works locally, but how can I get it working on Amazon RDS too? If it's not possible, I need a solution to use a MySQL trigger together with the sys_exec function or something similar.
Can someone help me?
Thanks!

Definitely not. RDS instances are locked down in two ways that prevent you from installing UDFs. Not to be confused with stored functions, UDFs are written in C and compiled; the binary shared-object file then has to be copied to the filesystem of the MySQL server, which is not accessible to you with RDS (that's one), and you have to have the SUPER privilege to actually load the UDF plugin from the file, which RDS does not grant (that's two).
Additionally, you can't use sys_exec() or sys_eval() on RDS, because those functions aren't built-in. They're UDF plugins, too.
This is one of the tradeoffs with RDS (or any "managed" database server service, I suspect). In exchange for simplicity and point & click provisioning, there are some things that you give up.
There is not a way to do what you want, in RDS.

If your final requirement is to call some external API in response to some SQL operation on an RDS database, you can leverage AWS SNS and Lambda to achieve this. It's not a straight road, but it will serve the purpose. In fact, I have used this workaround to meet one of my requirements. You can find the thread here.

There is not a way to do what you want, in RDS.
It is not possible to use a UDF, but there is a more elegant way to satisfy your needs.
A MySQL trigger running on AWS Aurora can call an AWS Lambda function. That function can do anything you want: call a REST API, publish a message to SQS, whatever.
https://aws.amazon.com/blogs/database/capturing-data-changes-in-amazon-aurora-using-aws-lambda/
And yes, I'm very late to the party but I'm posting this solution for random googlers like me :)
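For illustration, here is a minimal sketch of the Lambda side of that pattern in Python. It assumes the Aurora trigger invokes the function with a small JSON payload describing the change; the payload fields and the target URL are made up for the example.

```python
# Hedged sketch: Aurora MySQL invokes this Lambda from a trigger (via its native
# Lambda integration), and the function forwards the change to an external HTTP API.
# The endpoint and the payload shape are assumptions for illustration only.
import json
import urllib.request

API_ENDPOINT = "https://example.com/hooks/db-change"  # hypothetical endpoint

def handler(event, context):
    # 'event' carries whatever the trigger passed, e.g. table name, PK, action.
    body = json.dumps(event).encode("utf-8")
    req = urllib.request.Request(
        API_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return {"status": resp.status}
```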

Related

Database queries as application healthchecks - management tool

Hey there fellow Stackoverflowers,
In our company we have several application stacks running on different types of databases (MySQL, PostgreSQL, MS SQL, Azure SQL, ...). For monitoring purposes we use some scripted queries on the databases of all these application stacks, with Nagios reporting back the results in an email.
Now, since our support team would also like easy access to these queries in order to run or modify them manually, we were considering building an application specifically designed to store, run and modify queries that can be executed on any of the database types listed above, offering both a user-friendly web interface and a REST API with JSON output for our new reporting stack based on SENSU, to be deployed in a few months.
My personal belief is that a tool like this must already be out there, since the use case for it is so generic. However, googling did not yield any results even closely resembling what I am looking for.
So my question to you is: Do you know of such a tool? If you had to build it yourself: what would your approach be? We're mostly a Java/C++ team, but are open to all options.
Some or maybe all of this can be done by an existing API called NAGIRA. Look it up on Google. It will definitely give you all the results in JSON format, and I think it would also allow you to run checks manually. So you could build a little front end and call this API to achieve what you want.
A little late of a reply, but check out http://cloudmonix.com -- it offers the ability to create metrics based on custom SQL queries and supports SQL Azure, SQL Server, MySQL, and Oracle. It also integrates with Nagios (and Zabbix).
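If you do end up building it yourself, the core of such a tool is small. Here is a hedged sketch, assuming SQLAlchemy and a couple of hypothetical stored checks, of running a named query against different database types and returning JSON; connection URLs, check names, and queries are placeholders.

```python
# Minimal sketch of a cross-database healthcheck runner.
# DSNs, check names, and queries are illustrative assumptions; SQLAlchemy
# supplies the per-vendor drivers (MySQL, PostgreSQL, MSSQL, ...).
import json
from sqlalchemy import create_engine, text

# One entry per application stack (placeholder DSNs).
TARGETS = {
    "shop-mysql": "mysql+pymysql://monitor:secret@shop-db/shop",
    "crm-postgres": "postgresql+psycopg2://monitor:secret@crm-db/crm",
}

# Named checks the support team can run or edit.
CHECKS = {
    "pending_orders": "SELECT COUNT(*) AS value FROM orders WHERE status = 'pending'",
}

def run_check(target: str, check: str) -> dict:
    """Run one named query against one target and return a JSON-friendly dict."""
    engine = create_engine(TARGETS[target])
    with engine.connect() as conn:
        row = conn.execute(text(CHECKS[check])).mappings().first()
    return {"target": target, "check": check, "result": dict(row) if row else None}

if __name__ == "__main__":
    print(json.dumps(run_check("crm-postgres", "pending_orders"), indent=2))
```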

Is there a generic OData provider for SQL Server?

I've created a gadget for our CRM consultants that allows them to present data from an OData source in CRM. At the moment it will connect to any data source, but for customer sites we need to develop a new OData service using WCF for each data source.
Does anyone know if there's a decent generic tool out there that can retrieve data from SQL Server, present it (via IIS) as OData, and that can be configured by a non-developer without Visual Studio?
We (the WCF Data Services team) have heard this ask a couple of times; what follows are a few of my thoughts in no particular order.
We haven't heard the ask a lot. There's a reasonable amount of work to do here, and without sufficient asks it's hard to justify. That said, there's nothing stopping the community from spinning up an effort to achieve this (hint, hint :)).
There are a number of questions you would need to answer. For instance, what sort of default limitations would the provider have? Would you really want to allow arbitrary expands on something that's probably a production database server? What about permissions? What about read/write?
What happens for mutable schemas? Is this a completely dynamic provider? How much overhead is there in scanning the database schema, and how frequently would the database schema need to be scanned?
How would clients take advantage of a dynamic OData service? Most clients use some form of code generation to make interacting with the service easier.
These thoughts aren't really intended to dissuade at all, but hopefully they give you some things to think about should you attempt to create a generic provider on your own. If you do so, I'd love to hear about it.

Is there a way to prevent users from doing bulk entries in a PostgreSQL database

I have 4 new data entry users who are using a particular GUI to create/update/delete entries in our main database. The "GUI" client allows them to see database records on a map and make modifications there, which is fine and is the preferred way of doing it.
But lately a lot of them have been accessing the database directly using pgAdmin and running bulk queries (update, insert, delete, etc.), which introduces a lot of problems, like people updating many records without realising it or making mistakes while setting values. It also affects our logging procedures, as we calculate averages and timestamps for reporting purposes that are quite crucial to us.
So is there a way to prevent users from using pgAdmin (please remember that a lot of these users are working from home and we do not have access to their machines) and running SQL queries directly against the database?
We still have to give them access to certain tables and allow them to execute SQL as long as it comes through a certain client, but deny access to the same user when he or she tries to execute a query directly in the DB.
The only sane way to control access to your database is to convert your DB access methods to a 3-tier structure. You should build a middleware layer (perhaps a REST API or something similar) and use this API from your app. The database should be hidden behind this middleware, so no direct access is possible. From the DB's point of view, there is no way to tell whether a connection comes from your app or from some other tool (pgAdmin, plain psql, or some custom-built client). Your database should be accessible only from trusted hosts, and clients should not have access to those hosts.
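As a hedged sketch of what such middleware could look like (Flask plus the endpoint, table, and column names are assumptions, not part of the question), the API exposes only the operations you choose to allow and is the only thing that talks to the database:

```python
# Sketch of a thin middleware: clients call the API, never the database.
# Only whitelisted operations are exposed; object names are made up.
from flask import Flask, jsonify, request
import psycopg2

app = Flask(__name__)
DSN = "dbname=assets user=api_service password=secret host=db.internal"

@app.route("/assets/<int:asset_id>", methods=["PATCH"])
def update_asset(asset_id):
    # The only write the API permits: updating a single asset's status.
    status = request.get_json(force=True).get("status")
    if status not in {"active", "retired"}:
        return jsonify(error="invalid status"), 400
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE assets SET status = %s WHERE id = %s",
            (status, asset_id),
        )
        updated = cur.rowcount
    return jsonify(updated=updated)

if __name__ == "__main__":
    app.run()
```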
This is only possible if you use a trick (which might get exploited too, but maybe your users are not smart enough).
In your client app, set some harmless parameter like geqo_pool_size=1001 (if it is 1000 normally).
Now write a trigger that checks this parameter and raises "No access through PGAdmin" if it is not set the way your app sets it (and the username is not your admin username).
Alternative: create a temporary table and check for its existence.
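A hedged sketch of that trick, driven from Python with psycopg2; the orders table, the 1001 sentinel, and the 'admin_user' role name are all illustrative assumptions.

```python
# Sketch of the "secret handshake" parameter trick described above.
import psycopg2

SETUP_SQL = """
CREATE OR REPLACE FUNCTION reject_adhoc_clients() RETURNS trigger AS $$
BEGIN
    -- The official client sets geqo_pool_size to a sentinel value on connect;
    -- ad-hoc tools such as pgAdmin or psql will not.
    IF current_setting('geqo_pool_size')::int <> 1001
       AND current_user <> 'admin_user' THEN
        RAISE EXCEPTION 'No access through PGAdmin';
    END IF;
    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS orders_guard ON orders;
CREATE TRIGGER orders_guard
    BEFORE INSERT OR UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE PROCEDURE reject_adhoc_clients();
"""

def official_connection(dsn: str):
    """Open a connection the way the official GUI client would."""
    conn = psycopg2.connect(dsn)
    with conn.cursor() as cur:
        cur.execute("SET geqo_pool_size = 1001")  # the harmless handshake
    conn.commit()
    return conn
```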
I believe you should block direct access to the database and set up an application to which your clients (human and software ones) will be able to connect.
Let this application filter and pass only allowed commands.
Great care should be taken in the filtering: I would think carefully about whether raw SQL should be allowed at all. Personally, I would design some simplified API, which would assure me that a hypothetical client-attacker (in God we trust, all others we monitor) could not find a way to sneak in some dangerous modification.
I suppose that from a security standpoint your current approach is very unsafe.
You should study advanced pg_hba.conf settings.
This file is the key point for user authorization. The basic settings only cover simple authentication methods like passwords and lists of IPs, but more advanced solutions are available:
GSSAPI
Kerberos
SSPI
RADIUS server
any PAM method
So your official client can use a more advanced method, like something involving a third-tier API or a really complex authentication mechanism. Then, without using the application, it at least becomes difficult to redo these tasks, for example if the Kerberos key is encrypted inside your client.
What you want to do is REVOKE your users' write access, then create a new role with write access, then as that role CREATE FUNCTION defined as SECURITY DEFINER, which updates the table in the way you allow, with integrity checks, and then GRANT EXECUTE on this function to your users.
There is an answer on this topic on ServerFault which references the following blog entry with a detailed description.
I believe that using middleware, as other answers suggest, is unnecessary overkill in your situation. The above solution does not require the users to change the way they access the database; it just restricts them to modifying the data only through the predefined server-side methods.
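A rough sketch of that setup, with psycopg2 used only to run the DDL; the role names, the assets table, and the status check are hypothetical, not from the question.

```python
# Sketch of the SECURITY DEFINER approach: users lose direct write access and
# get a single vetted function instead. All object names are illustrative.
import psycopg2

SETUP_SQL = """
REVOKE INSERT, UPDATE, DELETE ON assets FROM data_entry_role;

CREATE ROLE asset_writer NOLOGIN;
GRANT INSERT, UPDATE, DELETE ON assets TO asset_writer;

-- Owned by asset_writer, so SECURITY DEFINER runs it with that role's rights.
CREATE OR REPLACE FUNCTION set_asset_status(p_id integer, p_status text)
RETURNS void AS $$
BEGIN
    IF p_status NOT IN ('active', 'retired') THEN
        RAISE EXCEPTION 'invalid status %', p_status;  -- integrity check
    END IF;
    UPDATE assets SET status = p_status WHERE id = p_id;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

ALTER FUNCTION set_asset_status(integer, text) OWNER TO asset_writer;
GRANT EXECUTE ON FUNCTION set_asset_status(integer, text) TO data_entry_role;
"""

with psycopg2.connect("dbname=assets user=postgres") as conn, conn.cursor() as cur:
    cur.execute(SETUP_SQL)
```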

Is it possible to have separate SQLite databases within the same Django project?

I was considering creating a separate SQLite database for certain apps on a Django project.
However, I did not want to use direct SQLite access if possible.
Django-style ORM access to these databases would be ideal.
Is this possible?
Thank you.
Yes - the low-level API for this is in place, it's just missing a convenient high-level API at the moment. These quotes are from James Bennett (Django's release manager) on programming reddit:
It's been there -- in an extremely low-level API for those who look at the codebase -- for months now (every QuerySet is backed by a Query, which in turn accepts a DB connection as an argument). There isn't any high-level documented API for it, but I know people who are already doing and have been doing stuff like multiple-DB/sharding scenarios.
...it's not necessarily something that needs a big write-up; the __init__() method of QuerySet accepts a keyword argument query, which should be an instance of django.db.models.sql.Query. The __init__() method of Query, in turn, accepts a keyword argument connection, which should be an instance of (a backend-specific subclass for your DB of) django.db.backends.BaseDatabaseWrapper.
From there, it's pretty easy; you could, for example, override get_query_set() on a manager to always return a QuerySet using the connection you want, or set up things like sharding logic to figure out which DB to use based on incoming query parameters, etc., etc.
Already supported: http://docs.djangoproject.com/en/dev/topics/db/multi-db/
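For anyone landing here now, a minimal sketch of what that documented multi-database support looks like; the second database alias, the file names, the app label, and the router module path are assumptions for illustration.

```python
# settings.py (sketch): a second SQLite database alongside the default one.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "default.sqlite3",
    },
    "analytics": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "analytics.sqlite3",
    },
}
DATABASE_ROUTERS = ["myproject.routers.AnalyticsRouter"]

# routers.py (sketch): send every model of the hypothetical 'analytics' app
# to its own database; everything else stays on 'default'.
class AnalyticsRouter:
    app_label = "analytics"

    def db_for_read(self, model, **hints):
        return "analytics" if model._meta.app_label == self.app_label else None

    def db_for_write(self, model, **hints):
        return "analytics" if model._meta.app_label == self.app_label else None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label == self.app_label:
            return db == "analytics"
        return db == "default"

# Ad-hoc routing also works without a router:
#   Report.objects.using("analytics").all()
```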
Currently no -- each project uses one database, and every app must exist within it. If you want to have an app-specific database, you cannot do so through the Django ORM. See the Django wiki page on Multiple Database Support.
This isn't possible yet, but there is some talk of it on the wiki, Multiple Database Support in Django. It was also brought up during the keynote on the future of Django at DjangoCon 2008 and made one of the higher priority issues.

Accessing Sharepoint from outside the WebUI

Is it possible to access the database backend of a SharePoint server? My company uses SharePoint to store data and pictures of various assets. Ideally I would be able to access the data and display it in my application to allow users both methods of access.
Before I go talk to the IT department I would like to find out if this is even possible?
Edit: From Rails on Linux? (Yes, I know I'm crazy)
Agree with Adam. Querying the SharePoint database is a big no-no, as Microsoft does not guarantee that the schema is in any way stable. Only access the database if there is really no other way.
As for SharePoint, the Lists.asmx web service is usually what you want to look at first.
http://www.c-sharpcorner.com/UploadFile/mahesh/WSSInNet01302007093018AM/WSSInNet.aspx
http://geekswithblogs.net/mcassell/archive/2007/08/22/Accessing-Sharepoint-Data-through-Web-Services.aspx
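Since the asker is on Rails/Linux rather than .NET, it helps to see that the web service is just SOAP over HTTP. A hedged sketch of calling GetListItems from Python; the site URL, list name, and NTLM credentials are placeholders, and requests_ntlm is assumed for the Windows authentication.

```python
# Sketch: reading a SharePoint list from outside the WebUI via Lists.asmx.
# URL, list name, and credentials are placeholders; adjust auth to your farm.
import requests
from requests_ntlm import HttpNtlmAuth

SITE = "http://sharepoint.example.com/sites/assets"
SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetListItems xmlns="http://schemas.microsoft.com/sharepoint/soap/">
      <listName>Assets</listName>
      <rowLimit>100</rowLimit>
    </GetListItems>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    SITE + "/_vti_bin/Lists.asmx",
    data=SOAP_BODY,
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://schemas.microsoft.com/sharepoint/soap/GetListItems",
    },
    auth=HttpNtlmAuth("DOMAIN\\svc_reader", "password"),
)
response.raise_for_status()
print(response.text)  # raw XML; parse the rs:data rows with an XML library
```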
yikes! :)
Look at the web services and the .NET API before going directly to the database. I've used both, and they provide plenty of flexibility (including building your own web services on top of the API if necessary). Use the API for on-server clients and the web services for off-server clients.
Just a small comment: never, ever go to the database directly. If there is no way to do it via published and supported APIs, then there is no way to do it. End of story. This applies even when you are "just reading data", as this can still cause significant issues.
Just to support the above: if you ever take a look at the SQL tables that sit behind SharePoint, you'll realise why it's not recommended or supported to access the database directly. MADNESS!
