Azure Mobile Service and Concurrency on database actions? - sql-server

I recently read up on Azure Tables, and that system has a built-in E-tag check for detecting concurrent actions. I assume that for Azure Mobile Services each of the insert and update methods etc. is atomic; however, I have been hard pressed to find any real information on concurrent data access. If I want this, is it up to me to implement it, or does Azure Mobile Services come with some kind of concurrency handling system?
The most basic use case I am looking into:
User 1 gets object A
User 2 gets object A
User 2 saves object A
User 1 saves object A -> This should result in a fault
Is it up to me to implement this? And how should I go about it? My first instinct would be to manually add an E-tag field to the object that is checked by a server-side script. Is there a better approach?

My best guess is that because WAMS uses SQL tables, it relies on optimistic locking. So I think the E-Tag is the way to go (a sketch follows the links below).
The following articles should shed some light on SQL for Azure:
Windows Azure Storage and Concurrent Access
Best Practices for the Design of Large-Scale Services on Windows Azure Cloud Services
How to get most out of Windows Azure Tables
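To make the E-Tag idea concrete, here is a minimal T-SQL sketch of the guarded save; the ObjectA table, its columns, and the parameters are hypothetical, not anything WAMS generates for you:

    -- The E-Tag is a GUID column regenerated on every successful save.
    UPDATE ObjectA
    SET    Payload = @payload,
           ETag    = NEWID()          -- new tag invalidates every other reader's copy
    WHERE  Id   = @id
      AND  ETag = @etagFromClient;    -- matches only if nobody saved in between
    -- @@ROWCOUNT = 0 means User 1's save should fault, exactly as in the
    -- scenario above.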

Related

Azure Mobile Service concurrency handling in SQL?

I am implementing an Azure Mobile Service and I have a set of objects that can be accessed by multiple users, potentially at the same time. My problem is that I can't really find a lot of information on how to handle the potential concurrency issues that might result from this.
I know "Azure Tables" implements an E-Tag check. Is there an out-of-the-box solution for Azure Mobile Services for this one with SQL ? If not, what is the approach I should be going for here.
Should I just implement the E-Tag check by hand? I could include a GUID of the object that is generated every time the object is saved and checked when saving. This should be relatively safe way to do it?
For conflicts between multiple clients, you would need to add a detection/resolution mechanism. You could use a "timestamp" (typically a sequentially incremented version number) in your table schema and check it in your update script: fail an update if the timestamp the client read is older than the current timestamp.
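In SQL terms that check folds into the update itself. A minimal sketch, assuming a hypothetical TodoItem table with an integer Version column that the client sends back when saving:

    -- Optimistic concurrency with a sequentially incremented version number.
    UPDATE TodoItem
    SET    Text    = @newText,
           Version = Version + 1
    WHERE  Id      = @id
      AND  Version = @clientVersion;  -- the version the client originally read
    -- Zero rows affected means another client saved in the meantime; the
    -- update script should then report a conflict (e.g. HTTP 409) to the caller.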
If you want to use ETags via http headers, you could use Custom APIs. We are looking into enabling CRUD scripts to also set headers but that is not available today. Separately, we are looking into the offline scenarios as well.
(Program Manager, Windows Azure Mobile Services)

Connect to the cloud directly to the database or through a service?

It might be a simple question, but as I couldn't find a good answer on Google I would like to know your thoughts.
I'm thinking of moving a piece of software I've made in WPF from accessing its data on a local server to a cloud server (maybe Azure).
What's the best way: connect directly to the database, or go through a service in the cloud (which I guess I would have to develop myself)?
Thanks!!!
In general, I would guard against directly accessing a database hosted in the cloud via a client application. You'll be exposing your database endpoint through the public internet, providing a significant attack vector.
By using a service, you can limit that attack vector. The service itself is also 'exposed' but can be locked down (typically more effectively/easily) with authentication/authorization protocols like OAuth, AD, etc. AND the service itself would expose only the operations necessary for the client application, versus access to the complete database schema (should someone crack the password when the database is on the open internet).
You didn't mention whether you were planning to use Windows Azure SQL Database or your own database on an IaaS VM. You can, of course, implement your own security via firewall etc. on VMs hosted in Windows Azure, but that's another infrastructure task you'll need to accommodate, and if your client IPs change, managing that is not insignificant.
I think the answer to this question would be the same whether you're on Azure or not.
A service adds an abstraction layer between the application and the database, which may help with maintenance in the long term, but it has a cost (in terms of initial effort) and some potential performance penalties (although these need not be significant), so in the end you'll have to weigh it for your application.
I really do not think there's a one-size-fits-all answer to this.

Is there a way to prevent users from doing bulk entries in a Postgresql Database

I have 4 new data-entry users who use a particular GUI to create/update/delete entries in our main database. The GUI client lets them see database records on a map and make modifications there, which is fine and the preferred way of doing it.
But lately a lot of them have been accessing the database directly using pgAdmin and running bulk queries (update, insert, delete, etc.), which introduces problems such as people updating lots of records without realizing it, or making mistakes while setting values. It also affects our logging procedures, as we calculate averages and timestamps for reporting purposes, which are quite crucial to us.
So is there a way to prevent users from using pgAdmin (remember, many of these users work from home and we do not have access to their machines) and running SQL queries directly against the database?
We still have to give them access to certain tables and allow them to execute SQL as long as it comes through a certain client, but deny access to the same user when he/she tries to run a query directly against the db.
The only sane way to control access to your database is to convert your DB access to a 3-tier structure. You should build a middleware layer (perhaps a REST API or something similar) and use that API from your app. The database should be hidden behind this middleware, so no direct access is possible. From the DB's point of view, there is no way to tell whether a connection comes from your app or from some other tool (pgAdmin, plain psql, or a custom-built client). Your database should be accessible only from trusted hosts, and clients should not have access to those hosts.
This is only possible with a trick (which might get exploited too, but maybe your users are not smart enough).
In your client app, set some harmless parameter, like geqo_pool_size=1001 (if it is normally 1000).
Now write a trigger that checks whether this parameter is set the way your app sets it, and raises "No access through PGAdmin" if it is not (and the username is not your admin username); a sketch follows below.
Alternative: create a temporary table and check for its existence.
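A rough PL/pgSQL sketch of the parameter variant; the map_records table, the admin_user role, and the marker value 1001 are all placeholders:

    -- The official client runs once per session:  SET geqo_pool_size = 1001;
    CREATE FUNCTION enforce_official_client() RETURNS trigger AS $$
    BEGIN
        IF current_setting('geqo_pool_size') <> '1001'
           AND current_user <> 'admin_user' THEN
            RAISE EXCEPTION 'No access through PGAdmin';
        END IF;
        RETURN COALESCE(NEW, OLD);  -- NEW is NULL in DELETE triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER official_client_only
        BEFORE INSERT OR UPDATE OR DELETE ON map_records
        FOR EACH ROW EXECUTE PROCEDURE enforce_official_client();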
I believe you should block direct access to the database and set up an application that your clients (human and software alike) connect to.
Let this application filter and pass through only the allowed commands.
Great care should be taken with the filtering; I would think carefully about whether raw SQL should be allowed at all. Personally, I would design a simplified API, which would reassure me that a hypothetical client-attacker ("In God we trust, all others we monitor") could not sneak in some dangerous modification.
From a security standpoint, I suppose your current approach is very unsafe.
You should study advanced pg_hba.conf settings.
This file is the key point for user authentication. The basic settings only cover simple authentication methods like passwords and lists of IPs, but more advanced solutions are available (an example follows this list):
GSSAPI
Kerberos
SSPI
a RADIUS server
any PAM method
So your official client can use one of the more advanced methods, perhaps combined with a third-tier API or some genuinely complex authentication mechanism. Without the application it then at least becomes difficult to redo these tasks, for example if the Kerberos key is encrypted into your client.
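For illustration only, a pg_hba.conf along these lines would force the data-entry group through GSSAPI while keeping password logins for admins on the office network; the database and role names are made up:

    # TYPE  DATABASE  USER          ADDRESS        METHOD
    # GSSAPI/Kerberos for the data-entry group: only the official client holds a key
    host    mapdb     +entry_users  0.0.0.0/0      gss
    # plain password logins, allowed only from the office network, for admins
    host    mapdb     admin         10.0.0.0/24    md5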
What you want to do is REVOKE your users' write access, then create a new role with write access, then as that role CREATE FUNCTION defined as SECURITY DEFINER, which updates the table in a way you allow, with integrity checks, and then GRANT EXECUTE on this function to your users.
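A sketch of those steps; the map_records table, its columns, and the entry_users role are hypothetical, and the function must be created by the role that holds the write access:

    REVOKE INSERT, UPDATE, DELETE ON map_records FROM entry_users;

    CREATE FUNCTION update_record(p_id int, p_value text) RETURNS void AS $$
    BEGIN
        -- integrity checks live here: validate inputs, one row at a time, etc.
        IF p_value IS NULL OR length(p_value) > 100 THEN
            RAISE EXCEPTION 'invalid value for record %', p_id;
        END IF;
        UPDATE map_records SET value = p_value WHERE id = p_id;
    END;
    $$ LANGUAGE plpgsql SECURITY DEFINER;  -- runs with the owner's privileges

    GRANT EXECUTE ON FUNCTION update_record(int, text) TO entry_users;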
There is an answer on this topic on ServerFault which references a blog entry with a detailed description.
I believe that introducing middleware, as other answers suggest, is unnecessary overkill in your situation. The above solution does not require users to change the way they access the database; it just restricts data modification to the predefined server-side methods.

How to protect a database?

There is a website with a server database. I'm building a desktop application which uses data from one of the tables. A hacker could simply extract the password from the assembly.
How can I protect the database?
I wouldn't store the database information in the application at all. Instead, I would create an API to the database on the website, perhaps implementing a RESTful interface or having queries that return data in an appropriate format, such as JSON, XML, or even plain text. The application could then call these web services and process the results. All of your database information stays on the server, where it is (hopefully) secure.
The API adds a sometimes unnecessary application layer. Not all applications I've been involved with convert easily from database calls to web service calls. If the application has not been written yet, I guess it would not matter that much.
My alternative implementation is:
Connect to the server using a secure tunnel of some sort.
Save the password encrypted on disk.
This would save me the effort of creating an API, which in most of my projects would be a waste of time.
This alternative is not viable if, say, you want to distribute the application to customers.
You can:
A) create a three-tier system. Your client could interface with a server that in turn interfaces with the database. The server stores the access credentials.
B) create personal accounts on the database for your users. This two-tier model is applicable if fine-grained access control to the data is needed, e.g. in an in-house application with different user roles.
Don't let the database user that the application logs in as perform any write operations, or any read operations on anything but the application data (a sketch of such grants follows below).
Or, choose a sane architecture, as Thomas mentions above. Databases are for storing and retrieving data, they are not a generic application server.
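For the account-per-user route, a Postgres-flavored sketch; the appdb database, the tables, and the user name are hypothetical:

    -- Each user gets an account limited to exactly what the client needs.
    CREATE ROLE alice LOGIN PASSWORD 'secret';
    GRANT CONNECT ON DATABASE appdb TO alice;
    GRANT SELECT ON products TO alice;                 -- read-only reference data
    GRANT SELECT, INSERT, UPDATE ON orders TO alice;   -- writes only where required
    -- no other grants: the account cannot read or write anything else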

Web services and database concurrency

I'm building a .NET client application (C#, WinForms) that uses a web service for interaction with the database. The client will be run from remote locations using a WAN or VPN, hence the idea of using a web service rather than direct database access.
The issue I'm grappling with right now is how to handle database concurrency. That is, if two people from different locations update the same data, how do I deal with it? I'm considering using timestamps on each database record and having that as part of the update where clauses, but that means that the timestamps have to move back and forth through the web service interface, which seems kind of ugly.
What is the best way to approach this?
I don't think you want your web service to talk directly to the database. You probably want your service to interact with business components which in turn interact with a data access layer. Any concurrency exceptions can be passed from the DAL up to the business layer, where they can be handled so that the web service never has to see the timestamps.
But if you are passing something like a DataTable up to the client and you want to avoid timestamps, you can do concurrency checking by comparing field by field. The TableAdapter wizards generate this type of concurrency checking by default if you ask for optimistic concurrency.
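Roughly, the statement such a wizard generates looks like this (T-SQL sketch; the Customers table, its columns, and the parameters are hypothetical):

    -- Every original value the client read becomes part of the WHERE clause.
    UPDATE Customers
    SET    Name  = @Name,
           Phone = @Phone
    WHERE  CustomerId = @CustomerId
      AND  Name  = @Original_Name
      AND  Phone = @Original_Phone;
    -- If another user changed the row first, no rows match, @@ROWCOUNT is 0,
    -- and the data layer raises a DBConcurrencyException; no timestamps needed.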
If your collisions occur infrequently enough that they can be resolved manually, a simple solution is to add an update trigger that copies a row's pre-update values to an audit table (sketched below). This way the most recent write "wins", but no data is ever lost to an overwrite, and an administrator can restore an earlier row state or even combine the versions.
This technique has its downsides, and is not a very good solution where frequent overwrites are common.
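A minimal T-SQL sketch of such a trigger; the Orders and OrdersAudit tables and their columns are hypothetical:

    CREATE TABLE OrdersAudit
    (
        OrderId   int,
        Status    varchar(20),
        ChangedAt datetime NOT NULL DEFAULT GETDATE()
    );
    GO
    -- "deleted" holds the pre-update row images inside an UPDATE trigger
    CREATE TRIGGER trg_Orders_Audit ON Orders AFTER UPDATE
    AS
        INSERT INTO OrdersAudit (OrderId, Status)
        SELECT OrderId, Status FROM deleted;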
Also, this is slightly off-topic, but using web services isn't necessarily the way to go just because the clients will be remoting into the network. ASP.NET web services are XML-based and very verbose. If your client application can count on being always connected, you'd be better off not using web services.
