I'm writing an NGINX module in C that requires access to Redis.
To keep those calls non-blocking, I want to access Redis asynchronously (either through the official C API for Redis or via redis2-nginx-module).
I've read Emiller's Guide and it seems to me that I need to build a "chain" of modules. Something like:
My module parses the HTTP request and makes a corresponding request to another module, which talks to Redis asynchronously as an "upstream"(?)
When the Redis response arrives, control returns to my module, which finalizes the HTTP response and sends the data back to the client.
What I don't get is how to implement such a chain. I can hardly find a good example; all built-in NGINX modules seem to hand control back to themselves (u = r->upstream;). Is there any way to specify another module as an upstream?
I'd appreciate your help with a good code example of chaining.
In the end I decided to implement part of the logic using the Lua interface (http://wiki.nginx.org/HttpLuaModule + https://github.com/openresty/lua-resty-redis).
It works in a non-blocking way using nginx subrequests, but provides a much easier way to write the module.
The handler-based part of the logic I implemented using C modules.
This is a very general question and it may sound sketchy at first. As a full-stack JavaScript developer, I come across the same problem again and again: matching the calls the frontend makes to the backend with the route handlers the backend uses for each request. So the code looks like this:
// frontend
onTrigger(event).then(some logic).then(api GET '/api/endpoint').then(do something with the response)
// backend
route.get('/api/endpoint').businessLogic().respond(data)
I want a way to unify all of this and, as a result, to have something like this:
onTrigger(event).then(some logic).unify(api GET '/api/endpoint').businessLogic().then(do something with the response)
I know it may sound terrible at first, but this way a developer has full control over the whole flow of the application. Just before building the project, a transpiler could find all the "unify" methods and generate the requests/API calls for the frontend and the route handlers for the backend. There is no reason to carry so much overhead code around the libraries that make the requests on the FE and the routers that handle them on the BE. I think it would be really helpful if you only had to deal with what matters, which is the business logic of the application. It would also be really convenient because the application's Data Transfer Objects and Errors could be shared between frontend and backend.
For example,
// Current situation
// frontend
onTrigger(event).then(fetch data).catch(http error).then(do something with data)
// backend
route.apiMethod().authorize().then(respond).catch(return new UnauthorizedError())
// proposed unification
onTrigger(event).unify(fetch data).then(do something with data).catch(UnauthorizedError).catch(a general error)
With the proposed solution, both the data model and UnauthorizedError can be shared seamlessly between FE and BE. This way a developer makes fewer mistakes with the DTOs and error messages passed between FE and BE.
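To make the sharing concrete, here is a rough sketch of what such a shared module might look like; everything in it (the UnauthorizedError class, makeBookDTO, the file path) is hypothetical and only illustrates the idea of one definition imported by both the FE build and the BE server:

// shared/contract.js - hypothetical module imported by both the frontend bundle and the backend
// (names and fields are illustrative only)
class UnauthorizedError extends Error {
  constructor(message) {
    super(message || 'Not authorized');
    this.name = 'UnauthorizedError';
    this.status = 401; // the same meaning on both sides
  }
}

// one definition of the data transfer object, so FE and BE cannot drift apart
function makeBookDTO(raw) {
  return { id: raw.id, title: raw.title, author: raw.author };
}

module.exports = { UnauthorizedError, makeBookDTO };

The backend would throw UnauthorizedError and serialize its name/status, and the frontend would match on the same name in its catch, which is roughly what the .catch(UnauthorizedError) step above assumes.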
So my question is whether there is any actual value in what I am proposing, and how hard it would be to create a "transpiler" that goes through the code, finds all the "unify" methods, and then generates the boilerplate code needed for FE and BE?
I am a beginner in the MEAN stack.
When invoking an unauthenticated REST API (no user log-in required), the API endpoints are exposed in the JS files. From what I've read on forums, there is no way to prevent abusers from calling the endpoints directly, or even from building their own web site/app on top of them. So my question is: is there any way to avoid exposing the endpoints in the JS files?
On a similar note, is there any way to avoid REST calls on the front end and still serve that dynamic content/API output data in a MEAN stack? I use EJS.
There's no truly secure way to do this. You can render the page on the server instead, so the client only sees HTML (and some limited JS).
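As a minimal sketch of that server-rendered approach, assuming an Express app with EJS (the route name, the view, and the getArticles() call are placeholders for whatever you actually use):

// server.js - render the data into HTML on the server, so no API URL appears in client JS
var express = require('express');
var app = express();
app.set('view engine', 'ejs');

app.get('/articles', function (req, res) {
  // getArticles() stands in for your server-side data access (e.g. a Mongoose query)
  getArticles(function (err, articles) {
    if (err) return res.status(500).send('Something went wrong');
    res.render('articles', { articles: articles }); // views/articles.ejs loops over the data
  });
});

app.listen(3000);

The client only receives the finished HTML; it never sees which endpoint or collection the data came from.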
First, if you don't enable CORS, your AJAX calls are protected by the browser, i.e. only pages served from domain A can make AJAX calls to domain A.
Public API providers like Google Maps protect themselves by making you use an API key that they link to a Google account.
That key is still visible in the JS, but - if abused - can be easily disabled.
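As a rough illustration of that pattern (not Google's actual implementation), the backend can check every request's key against a store and flip a flag when a key is abused; the header name and the in-memory keys table below are made up for the example:

// requireApiKey.js - hypothetical Express middleware for API-key checking
var keys = { 'abc123': { owner: 'site-a', disabled: false } }; // in real life this is a database

function requireApiKey(req, res, next) {
  var entry = keys[req.get('X-Api-Key')];
  if (!entry || entry.disabled) {
    return res.status(403).json({ error: 'invalid or disabled API key' });
  }
  next();
}

module.exports = requireApiKey;

// usage: app.use('/api', requireApiKey);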
There's also pseudo-security through obfuscation, i.e. making it harder for an attacker to extract a common secret you are using to encrypt the API interaction.
Tools like http://javascript2img.com/ are far from perfect, but they make attackers spend a lot of time trying to figure out what your code does.
Often that is enough.
There are also various ways to download JS code on demand which can then check if the DOM it runs in is yours.
I made a POST request to a Sinatra app. I noticed that the parameters arrive at the server as a StringIO. It can be read using request.body.read. However, it can only be read once. To read it again, I need to run request.body.rewind (haha, Sinatra).
Why is it designed this way? I can see this being useful in streaming data but are there other applications?
Parameters are available within Sinatra via the params hash. request.body.read and request.body.rewind are part of Rack; they are not actually implemented within Sinatra. The most common way I have used this in the past is when I'm using Sinatra strictly as a web API and serializing/de-serializing my payload.
In the official trigger.io docs there seems to be no provision for custom HTTP headers when it comes to the forge.file module. I need this so I can download files behind an HTTP authentication scheme. This seems like an easy thing to add, if support is not already there.
Any workarounds? Any chance of a quick fix in the next update? I know I could use forge.request instead, but I'd like to keep a local copy (saveURL).
Thanks
Unfortunately the file module just uses simple "download url" methods rather than a full HTTP request library, which makes it a fairly big task to add support for custom headers.
I've added a task to our backlog for this, but I don't have a timeframe for it being added.
Currently on iOS you can do basic auth by using URLs in the form http://user:password@url.com, in case that helps.
Maybe to avoid this you could configure your server differently, or put a proxy server in front that allows you to pass authentication details as GET parameters?
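To illustrate the basic-auth URL form mentioned above, something along these lines should work on iOS; note that the (success, error) callback signature of forge.file.saveURL here is my assumption based on the usual forge callback style, so double-check it against the docs:

// hypothetical sketch - credentials embedded in the URL (iOS only, per the note above)
forge.file.saveURL("http://user:password@example.com/protected/report.pdf",
  function (file) {
    // success: 'file' refers to the locally saved copy
  },
  function (err) {
    // error: e.g. wrong credentials or a network failure
  }
);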
I'm trying to develop an AngularJS application using the Angular tutorial web-server script.
Is it possible or smart to use it for a development-only scenario?
I want to be able to develop and test my Angular application without relying on the real server and real database; that's the reason I'm asking this.
I don't know much about the tutorial web-server script.
When it comes to your situation, though, your best bet is to abstract away your data-management logic. In other words, you can make a set of services that take care of loading and saving your data. You could have methods like book.save() or book.fetch().
Then, inside save() and fetch(), you can return or insert an object literal, or request a JSON file.
Assuming that your product will be running on JSON data, you should be able to write another set of model services that fetch JSON data from the server rather than any that you've hard-coded in the app or in a *.json file.
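For example, here is a rough AngularJS sketch of such a service; the module name, the bookService name, the URLs, and the fake data are all placeholders to adapt:

// bookService.js - hypothetical data service; flip useFakeData once the real server exists
// (assumes a module declared elsewhere with angular.module('app', []))
angular.module('app').factory('bookService', ['$http', '$q', function ($http, $q) {
  var useFakeData = true;
  var fakeBooks = [{ id: 1, title: 'Dune' }]; // hard-written stand-in data

  return {
    fetch: function () {
      if (useFakeData) {
        return $q.when(fakeBooks); // same promise-based contract as $http
      }
      return $http.get('/data/books.json').then(function (res) { return res.data; });
    },
    save: function (book) {
      if (useFakeData) {
        fakeBooks.push(book);
        return $q.when(book);
      }
      return $http.post('/api/books', book).then(function (res) { return res.data; });
    }
  };
}]);

Controllers only ever call bookService.fetch() and bookService.save(), so swapping the tutorial web-server or hard-coded data for the real backend later is a change inside this one service.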