Can GCM cloud server access my app server database table? - google-app-engine

I am using Google Cloud Messaging in my web-based Android application. I want to send a message to all of my Android apps through GCM (one by one, not simultaneously). Normally, my web server sends a request to GCM with the data, and GCM then sends that data to the particular app. So if my database contains records for 10 apps, my web server has to make 10 requests to GCM. Is there a way for my web server to give GCM access to the database table, so that GCM could use that table to send messages to the apps one by one? Then my web server would not need to call the GCM server 10 times. Is it possible?
Thanks in advance for your kind reply!

There is no way Google can access your database, but you can send multicast messages to up to 1000 recipients by using the registration_ids parameter instead of to in your HTTP request.
See also https://developers.google.com/cloud-messaging/server-ref#downstream
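For illustration, here is a minimal sketch of such a multicast request in Java, POSTing to the documented GCM HTTP endpoint; the API key and registration tokens are placeholders:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    public class GcmMulticast {
        // Placeholder: use the server API key from your Google developer console.
        private static final String API_KEY = "YOUR_SERVER_API_KEY";

        public static void send(List<String> registrationIds, String message) throws Exception {
            // Build the JSON payload by hand to keep the sketch dependency-free.
            // registration_ids accepts up to 1000 tokens per request.
            StringBuilder ids = new StringBuilder();
            for (String id : registrationIds) {
                if (ids.length() > 0) ids.append(',');
                ids.append('"').append(id).append('"');
            }
            String payload = "{\"registration_ids\":[" + ids + "],"
                    + "\"data\":{\"message\":\"" + message + "\"}}";

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://gcm-http.googleapis.com/gcm/send").openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "key=" + API_KEY);
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(payload.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("GCM response: " + conn.getResponseCode());
        }
    }

So one call covers all 10 (or up to 1000) recipients instead of 10 separate requests.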
Upd.: you can also subscribe all your clients to a single topic and then send to that topic.
https://developers.google.com/cloud-messaging/topic-messaging
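With topics, the request body targets the topic instead of individual tokens. Reusing the connection setup from the sketch above, only the payload changes; the topic name "global" is a placeholder:

    // Every client subscribed to the "global" topic (placeholder name)
    // receives the message; no registration IDs are needed server-side.
    String payload = "{\"to\":\"/topics/global\","
            + "\"data\":{\"message\":\"Hello subscribers\"}}";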

Related

Query triangulation

I have the following usage pattern, and I'm wondering if there's a known way to deal with it.
Let's say I have a website where a user can build a query to run against a remote database. The remote database is secure and the user will not have access to it. Therefore the query, which will be something like SELECT * FROM myTable, will be sent to our web server; our web server will query the remote DB on another server, receive the results, and pass them back in the HTTP response. So the flow is:
Location1 (Europe): User/browser submits HTTP POST containing the SQL Query.
Location2 (US): HTTP Server receives request, runs SQL against the database.
Location3 (Asia): Database runs query, returns data
Location2 (US): HTTP Server receives SQL resultset back. Sends response.
Location1 (Europe): User/browser receives the data back in the rendered webpage.
Supposing that I don't have control of the three locations, we can see that there may be a lot of data transfer latency if the size of the resultset is large. I was wondering if there is any way to do something like the following instead, and if so how it could be done:
Location1 (Europe): User/browser submits HTTP POST containing the SQL Query.
Location2 (US): HTTP Server receives request, sends back QueryID immediately, runs SQL against database, asynchronously.
Location3 (Asia): Database runs query
Location1 (Europe): User/browser receives response from database. (How? It cannot pull directly from DB)
To summarize, if we imagine the resultset is 50MB in size, in the first case, the 50MB would go from:
Asia (DB) -> US (Server) -> Europe (Client)
and in the second case it would go from:
Asia (DB) -> Europe (Client)
You can decouple authentication from authorization to allow more flexible connections between all three entities: browser, HTTP server, and DB.
To make your second example work you could do:
The HTTP server (US) asynchronously submits the query to the DB (Asia) and requests an auth token for it.
The HTTP server (US) sends the auth token back to the browser (Europe), while the query is now running.
The browser (Europe) now initiates a second HTTP call against the DB (Asia) using the auth token, and maybe the queryID as well.
The DB will probably need to implement a simple token auth protocol. It should:
Authenticate the incoming auth token.
Retrieve the session.
Start streaming the query result set back to the caller.
For the DB server, there are plenty of slim, out-of-the-box Docker images you can spin up in seconds that implement an authorization server and can listen to the browser behind nginx.
As you can see, the architecture can be worked out. However, the DB server in Asia will need to be revamped to implement some kind of token authorization. The simplest and most widespread strategy is OAuth2, which is all the rage nowadays.
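As a rough sketch of that token-auth step on the DB side, here is what such an endpoint could look like as a Java servlet; the auth_tokens and query_results tables and the JDBC URL are all hypothetical:

    import java.io.IOException;
    import java.sql.*;
    import javax.servlet.http.*;

    // Hypothetical endpoint in front of the DB (Asia) that authenticates the
    // browser's token, retrieves the session, and streams the result set back.
    public class ResultStreamServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String token = req.getHeader("Authorization"); // issued by the US server
            String queryId = req.getParameter("queryId");

            try (Connection db = DriverManager.getConnection("jdbc:...")) { // placeholder URL
                // 1. Authenticate the incoming auth token (hypothetical table).
                try (PreparedStatement check = db.prepareStatement(
                        "SELECT 1 FROM auth_tokens WHERE token = ? AND query_id = ?")) {
                    check.setString(1, token);
                    check.setString(2, queryId);
                    if (!check.executeQuery().next()) {
                        resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                        return;
                    }
                }
                // 2. Stream the stored result row by row rather than buffering
                //    the whole (possibly 50 MB) result set in memory.
                try (PreparedStatement q = db.prepareStatement(
                        "SELECT payload FROM query_results WHERE query_id = ?")) {
                    q.setString(1, queryId);
                    ResultSet rs = q.executeQuery();
                    while (rs.next()) {
                        resp.getWriter().println(rs.getString(1));
                    }
                }
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }
    }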
Building on @TheImpaler's answer:
How about adding another table to your remote DB that is used just for retrieving query results?
When the client asks the backend service to run a database query, the backend service generates a UUID or other secure token and tells the DB to run the query and store the result under that UUID. The backend service also returns the UUID to the client, which can then retrieve the associated data from the DB directly, as sketched below.
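A minimal sketch of the staging step on the backend, assuming a per-query result table named by the token (the naming scheme and the CREATE TABLE ... AS SELECT form are assumptions about the remote DB):

    import java.sql.*;
    import java.util.UUID;

    public class QueryStaging {
        // Runs the user's query on the remote DB and stages the result under a
        // table named by a fresh UUID token, which is returned to the client.
        public static String stageQuery(Connection db, String userQuery) throws SQLException {
            // In production the user query must be validated/sandboxed first.
            String token = UUID.randomUUID().toString().replace('-', '_');
            try (Statement st = db.createStatement()) {
                // Materialize the result set server-side, keyed by the token.
                st.executeUpdate("CREATE TABLE result_" + token + " AS " + userQuery);
            }
            // The client then reads result_<token> directly from the DB,
            // ideally with a credential scoped to that one table.
            return token;
        }
    }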
TLDR:
Europe (Client) -> US (Server) -> Asia (Server) -> Asia (DB)
Open an HTTP server in Asia (if you don't have access to the same DC/server, rent a different one), then redirect requests from the US HTTP server to the Asia one, which will connect to the local DB and stream the response.
The redirect can be either a public one (302) or private proxying over VPN, if you care about latency and have that option; see the sketch after this answer.
The frontend talking to the DB directly is not a very good pattern, because it leaves no room for the middleware operations you'll need in the long term (breaking changes, analytics, authorization, redirects, rate limiting, scalability...).
If your SQL is very heavy and you can't do synchronous requests over long-lived TCP connections, set up streaming over a WebSocket server (also in Asia).
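A sketch of the public-redirect (302) variant, assuming a servlet on the US server and a placeholder hostname for the Asia server:

    import java.io.IOException;
    import javax.servlet.http.*;

    // US-side servlet: accepts the query, then bounces the client to the Asia
    // server that sits next to the DB, so the result set skips the US hop.
    public class QueryRedirectServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Hypothetical helper that submits the query asynchronously.
            String queryId = registerQuery(req.getParameter("sql"));
            // 302: the browser re-issues the request against the Asia endpoint.
            resp.sendRedirect("https://asia.example.com/results/" + queryId);
        }

        private String registerQuery(String sql) {
            return "query-id"; // placeholder implementation
        }
    }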

How to connect SQL Server with Swift

I'm working now on an application for iOS (using Swift); the database already exists in SQL Server.
How will I use it and connect to it? Do I need a web service to do that?
Thanks, all.
It is recommended to use a web service, since having the application talk directly to the database means you need to include the SQL credentials in the binary, and anyone with a copy of the application can extract them and do whatever they wish in the database. From a security point of view, this is bad.
The correct approach is to have a web server which will host an "API" -- a web application that will receive HTTP requests from the app and translate them to database queries and then will return the response in another format, such as JSON.
However, you need to be careful: this web service must use HTTPS and must validate its input in order to protect against attacks such as SQL injection.
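A minimal sketch of one such API endpoint, shown here in Java since the server side need not be Swift (the users table, column names, and JDBC URL are placeholders); note the parameterized query, which is the protection against SQL injection:

    import java.io.IOException;
    import java.sql.*;
    import javax.servlet.http.*;

    // Hypothetical endpoint: GET /user?id=42 returns {"id":42,"name":"..."}
    public class UserApiServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            int id;
            try {
                // Validate input before it goes anywhere near the database.
                id = Integer.parseInt(req.getParameter("id"));
            } catch (NumberFormatException e) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST);
                return;
            }
            try (Connection db = DriverManager.getConnection("jdbc:sqlserver://..."); // placeholder
                 PreparedStatement ps = db.prepareStatement(
                         "SELECT name FROM users WHERE id = ?")) {
                ps.setInt(1, id); // parameterized: user input is never concatenated into SQL
                ResultSet rs = ps.executeQuery();
                resp.setContentType("application/json");
                if (rs.next()) {
                    // Real code would JSON-escape the value before writing it.
                    resp.getWriter().write("{\"id\":" + id + ",\"name\":\"" + rs.getString(1) + "\"}");
                } else {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                }
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }
    }

The iOS app then talks to this endpoint over HTTPS with a normal URL-loading request, instead of holding SQL credentials itself.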

Sending mass email using the JavaMail API

I have a requirement to send emails to approximately 15,000 addresses. The content of the email is the same for all of them. I talked to my mail server administrator, and according to him I can send only 500 emails/hour. I wrote a utility using the JavaMail API to achieve this. I am creating a connection (transport.connect()) and then reusing it. My utility will run for approximately 30 hours to send all 15,000 emails.
The question I have: is there any limit on the number of emails that can be sent per connection? Are there any timeout issues I could run into? Should I close the connection and get a new one at some interval, like after sending 100 emails or after 1 hour?
The answers to all your questions depend on your mail server, not on JavaMail. Talk to your mail server administrator again.
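For reference, a minimal JavaMail sketch of the pattern the question describes: one reused Transport connection, a throttle to stay under 500 emails/hour, and a reconnect every 100 messages; the host, credentials, and the 100-message interval are assumptions to confirm with your administrator:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Properties;
    import javax.mail.*;
    import javax.mail.internet.*;

    public class BulkMailer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com"); // placeholder host
            Session session = Session.getInstance(props);

            List<String> recipients = Arrays.asList("a@example.com", "b@example.com"); // placeholders
            Transport transport = session.getTransport("smtp");
            transport.connect("smtp.example.com", "user", "password"); // placeholder credentials

            int sent = 0;
            for (String to : recipients) {
                MimeMessage msg = new MimeMessage(session);
                msg.setFrom(new InternetAddress("noreply@example.com"));
                msg.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
                msg.setSubject("Announcement");
                msg.setText("Same content for every recipient.");
                transport.sendMessage(msg, msg.getAllRecipients());

                // ~500 emails/hour means one send every 7.2 seconds on average.
                Thread.sleep(7200);

                // Assumed interval: recycle the connection every 100 messages in
                // case the server imposes per-connection limits; ask your admin.
                if (++sent % 100 == 0) {
                    transport.close();
                    transport.connect("smtp.example.com", "user", "password");
                }
            }
            transport.close();
        }
    }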

SQL Server Event Notifications & Service Broker - minimum req'd for multiple servers?

I'm trying to figure out the easiest way to send SQL Server Event Notifications to a separate server using service broker. I've built an endpoint on each server, a queue on each server, working on Dialogs and Contracts and activation... but do I need any of that?
CREATE EVENT NOTIFICATION says it can send the notification XML to a "target service" - so could I just create a contract on the "sending" server that points to a queue on a "receiving server", and use activation there?
Or do I need to have it send to a local queue and then forward on to the receiving server's queue? Thanks!
You can target the remote service, but you have to have the ROUTEs defined for bidirectional communication so that you get the acknowledgement message back. I once had a script for creating a centralized processing server for all Event Notifications, and the other servers targeted its service. If I can find it I'll post it on my blog and update this with a link.

ADO.NET Data Service API Versioning

We created an ADO.NET Data Service on top of our EDMX file as the main entry point of our central application. In the near future a lot of distinct applications will show up and consume our REST service.
So far, so good, but there is one thing I'm missing: I don't want to update all my consumers each time a new version of ADO.NET Data Services is published. How can I achieve that kind of backward compatibility?
Thank you,
Stéphane.
The Data Services client and server do not do version negotiation at connection time; they do it for every request. Each request or response includes a version header that indicates what version of the client or server is required to service that request. This means that a down-level client can communicate with an up-level server so long as the server can respond to those requests without doing anything that requires it to raise the version number of the response. Features that require the service to use higher-version responses are all off by default.
What this means is that as new versions of Data Services are published, the client and server will continue to be able to communicate with each other, regardless of which version is installed on the client, so long as no features have been enabled on the server that require a higher-version client.
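As an illustration, those per-request version headers can be inspected or pinned on a raw HTTP call; a sketch in Java against a hypothetical service URL:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class VersionProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint; substitute your own data service URL.
            URL url = new URL("https://example.com/MyService.svc/Customers");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            // Versioning happens per request, not per connection. Pinning
            // MaxDataServiceVersion tells the server the highest response
            // version this client understands.
            conn.setRequestProperty("DataServiceVersion", "1.0");
            conn.setRequestProperty("MaxDataServiceVersion", "1.0");

            System.out.println("HTTP " + conn.getResponseCode());
            // The response carries the version actually used for this request.
            System.out.println("DataServiceVersion: "
                    + conn.getHeaderField("DataServiceVersion"));
        }
    }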
