Title says it all, I think.
I have encountered both and don't know which one I should use: $_SESSION['Auth'] or $this->Authentication->getIdentity(). Is one safer than the other?
Thank you,
Simon
With CakePHP you should always use the abstracted APIs to access superglobal data like $_POST, $_COOKIE, $_SESSION, etc.
This is advised for a multitude of reasons that depend on the specific situation, but generally it touches on the principle of dependency inversion, and decoupling in general: your code should depend on abstractions, not concrete implementations, so that the implementations can change without breaking your application. And while the session object, the request object, and the authentication component aren't interfaces, they still abstract access to the underlying data (the concrete part, so to speak).
One place where this matters in practice is testing. With the exception of the CakePHP session object, which must write its data to the $_SESSION superglobal internally, the other superglobals ($_GET, $_POST, $_COOKIE, etc.) are not populated when you use the APIs provided by CakePHP; instead the data is written into the request object, which exposes it via its own API. So if you accessed $_POST directly in your code and then passed POST data in a test like $this->post('/url', $postData), your code wouldn't see the data, because it lands in the request object rather than in the $_POST superglobal.
As far as the authentication example specifically goes, the authentication middleware could have obtained the identity from who knows where (the session, cookies, tokens, etc.), and likewise it could persist the identity anywhere (the session, cookies, etc.). The inner layers of your application shouldn't have to care about such implementation details: they obtain the identity via the component, or from the request object, and that's it. They don't need to know anything else, and you can then easily change how authentication is handled without breaking the rest of your application.
So currently in the project we have a collection of documents that don't require authentication to be read. They are write/update protected, but everyone can read.
What we are trying to prevent is someone looking at the Firebase endpoints and somehow managing to scrape the entire collection as JSON (if that is even possible). The data is public, but I want it to be accessible only from our website.
One of the solutions we could think of was SSR (we are already using Next.js), but implementing SSR just for this reason doesn't seem very enticing.
Any suggestions would be appreciated.
EDIT:
Let me rephrase a little bit.
From what you see in the network tab, is it possible to forge/create a request to Firestore and get the entire collection instead of just the 1 document that was intended?
The best solution in your case is SSR. I know it may not sound enticing, but let's reason about when SSR should be used. In your use case there is an important requirement: security. I think that is already a strong enough reason to justify the use of SSR.
Also, creating an ad hoc service account for the Next.js app and locking the data down with rules that block client reads would only improve the overall security level (the Admin SDK authenticated with the service account bypasses security rules, so your server can still read the data).
Lastly, reading the data server side should make your site a little faster, even if the difference would be hard to notice, because we are talking about milliseconds. As it is now, your page has to load before the request to Firebase can be sent, which adds a small delay. If the data is loaded server side, that delay is not added.
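For illustration, here is a minimal sketch of what the server-side fetch could look like, assuming a Next.js page using getServerSideProps and the Firebase Admin SDK; the route, collection name, and fields are invented for the example:

```ts
// pages/docs/[id].tsx: illustrative only; route, collection, and field names are assumptions.
import type { GetServerSideProps } from "next";
import * as admin from "firebase-admin";

// Initialise the Admin SDK once. It runs only on the server and bypasses security rules,
// so the collection can stay locked down for clients.
if (!admin.apps.length) {
  admin.initializeApp();
}

type Props = { doc: Record<string, unknown> | null };

export const getServerSideProps: GetServerSideProps<Props> = async ({ params }) => {
  const snap = await admin
    .firestore()
    .collection("documents") // assumed collection name
    .doc(String(params?.id))
    .get();

  // Assumes the document contains only JSON-serialisable fields (Next.js requires that for props).
  return { props: { doc: snap.exists ? (snap.data() as Record<string, unknown>) : null } };
};

export default function DocPage({ doc }: Props) {
  // The data arrives already rendered into the HTML, so no Firestore request
  // is visible in the browser's network tab.
  return <pre>{JSON.stringify(doc, null, 2)}</pre>;
}
```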
is it possible to forge/create a request to Firestore and get the entire collection instead of just the 1 document that was intended?
If you want to limit what people can request from a collection, you're looking for security rules. The most common model there is some form of ownership-based access control or role-based access control, but both of those require some way of identifying the user. This could be done anonymously (i.e. without them entering credentials), but it would still be a form of auth.
If you don't want to do that, you can still control how much data can be retrieved through the API in one go. For example, if your security rules allow get but not list, a user can only request a document once they know its ID. Even if you allow list, you can control in the rules which queries are allowed.
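To make the get-versus-list distinction concrete, here is a rough client-side sketch using the Firebase v9 modular SDK (the collection name and document ID are assumptions). With rules that allow get but not list, the first call succeeds while the second is rejected with a permission error:

```ts
import { initializeApp } from "firebase/app";
import { getFirestore, doc, getDoc, collection, getDocs } from "firebase/firestore";

const app = initializeApp({ /* your web app config */ });
const db = getFirestore(app);

async function demo() {
  // "get": fetch a single document whose ID you already know.
  // Permitted by rules such as `allow get: if true;` on the collection.
  const one = await getDoc(doc(db, "documents", "some-known-id"));
  console.log(one.data());

  // "list": query the whole collection in one request.
  // Rejected with "permission-denied" when the rules do not `allow list`.
  try {
    const all = await getDocs(collection(db, "documents"));
    console.log(all.size);
  } catch (err) {
    console.error("listing blocked by security rules:", err);
  }
}

demo();
```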
I think one approach could be writing a Cloud Function that retrieves this public data using the admin SDK. Then, you could set a rule that nobody can read those documents. This means that only your Cloud Function with the admin SDK will have access to those documents.
Finally, you could set up App Check for that specific Cloud Function; this way, you ensure that requests are coming from your client app only.
https://firebase.google.com/docs/app-check
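As a rough sketch of that approach, assuming 2nd-gen callable functions and an invented collection name, the function below reads the data with the Admin SDK and enforces App Check so only requests from your registered app are accepted:

```ts
import { onCall } from "firebase-functions/v2/https";
import * as admin from "firebase-admin";

admin.initializeApp();

// Callable function the client uses instead of reading Firestore directly.
// With `enforceAppCheck: true`, requests without a valid App Check token are rejected,
// and the Admin SDK read below bypasses the "deny all reads" security rule.
export const getPublicDocs = onCall({ enforceAppCheck: true }, async () => {
  const snapshot = await admin
    .firestore()
    .collection("publicDocs") // assumed collection name
    .limit(50)                // avoid handing back the whole collection in one response
    .get();

  return snapshot.docs.map((d) => ({ id: d.id, ...d.data() }));
});
```

On the client you would then call this with httpsCallable from firebase/functions rather than querying Firestore directly.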
I would like to be able to access data from the database within an Action Handler in Hasura. Is the best approach:
to make a GraphQL query to the API exposed by Hasura; or
use a client, like Prisma, to read from the db directly?
I'm not sure there is a best approach. Personally, I use the API exposed by Hasura in action handlers. I've chosen to do this because:
I like the API exposed by Hasura.
I like the access controls that Hasura layers over top of the DB (although you could just use the admin account, too).
The API can be used to generate TypeScript types, which I use in the action handler. This means that I can confidently change the database schema or Hasura API and then see where my other code fails.
I have thought many times about your question though, because there is something a little odd about calling back into Hasura to handle a Hasura action. But the reasons mentioned above are substantial gains in my opinion, so I've stuck with this approach.
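For what it's worth, here is a bare-bones sketch of what such a handler can look like: an Express webhook that calls back into Hasura's GraphQL API, forwarding the session variables so Hasura's permission rules still apply. The action name, query, table, and environment variable names are assumptions, not anything from your setup:

```ts
import express from "express";

const app = express();
app.use(express.json());

// Hasura calls this webhook when the (hypothetical) `getUserProfile` action runs.
app.post("/actions/getUserProfile", async (req, res) => {
  const { session_variables, input } = req.body;

  // Call back into Hasura's GraphQL API. The admin secret authenticates the handler,
  // while the forwarded role/user id make Hasura evaluate the caller's permissions.
  const response = await fetch(process.env.HASURA_GRAPHQL_URL!, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": process.env.HASURA_ADMIN_SECRET!,
      "x-hasura-role": session_variables["x-hasura-role"],
      "x-hasura-user-id": session_variables["x-hasura-user-id"],
    },
    body: JSON.stringify({
      query: `query GetUser($id: uuid!) { users_by_pk(id: $id) { id name email } }`,
      variables: { id: input.userId },
    }),
  });

  const { data, errors } = await response.json();
  if (errors) return res.status(400).json({ message: errors[0].message });
  return res.json(data.users_by_pk);
});

app.listen(3000);
```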
I'm very new to the world of REST APIs and frontend JS frameworks, and I don't really understand how I can limit a frontend's access to specific pages. I don't really think I can, can I? I'll explain:
Usually, if I develop without a REST API, I can use the backend to determine whether a user may access the content of certain pages and block it if needed, so there is no way to download (and view) whatever would be presented on those pages.
On the other hand, if I build a REST API for the same pages, I can only limit the data being presented (I basically block any request to a protected endpoint). The user will still be able to download the structure of the page (the frontend part): even if I check whether the user may view the page, they can still download and inspect it, because the check and all the logic for presenting the data live in the frontend, which the user can read as code.
Am I getting this right? If not, please explain it to me.
I am building a JSON:API backend for my website, and while looking at various frontend components I came across
https://github.com/dixieio/redux-json-api/tree/master/docs
which seems to resolve the endpoint URL directly from the resource type.
Is it part of the spec/recommendations to have the endpoint resolved exactly from the resource type? I remember reading a comment explaining that there isn't an actual type naming convention.
My API has several endpoints for the registration of different types of user
/registration/admin
/registration/customer
etc.
Those endpoints have different business logic associated with them, but they all return a user-type object.
Is it bad design to have several endpoints returning the same resource type?
Should I change my code to introduce an additional type like registration/user?
Or should I submit a patch to the library so it accepts custom endpoint URLs?
I can't speak specifically to the framework you're using, but you have complete freedom to choose what your HTTP resources represent. If, for example, customers can be corporate and have associated invoices and sales history, but admins are only ever individuals and cannot transact, you could make a strong case for keeping the resources separate.
One thing you should try to avoid is letting the limitations of your software dictate your URI structure. If I were creating this API, and had decided that customers and admins were different types of object, I would have the registration form resources live at /admins/new and /customers/new, which would make POST requests to the collection resources at /admins and /customers. I would not have a /registration* at all.
To address your individual questions:
I don't understand what you mean by ‘returning a resource type’ — are you talking about the representations returned in the responses, or how the back-end functions are creating and returning instances of a class?
I would not add an additional super-type for all kinds of user. Either have one collection per type, or one type for all.
If, after considering all of the above and choosing the URIs you want, your software cannot handle the structure you have chosen, there are three options:
i) choose more capable software
ii) create a mapping between the incoming URI and the URI handed to your software. Apache mod_rewrite can do this for you
iii) as you suggest, make the software you are already using more capable
Choose the easiest option.
I have an issue that I wonder whether Restangular has support for. I have a UserModel which is part of my model layer. It may have custom attributes that the server doesn't have in its model, as well as behavior. I'm not clear whether I can use my custom UserModel, send it to the backend, and when the response returns, transform it back into the UserModel object of my model layer so that I still have the custom attributes and methods.
Here's the plunker: http://plnkr.co/edit/IlYcSRuX3GPWmewxniuq?p=preview
Where do I handle the transformation? Do I add the methods in the config block, or should I add them via a response interceptor? What about custom attributes that the server might not send back to me? I haven't run across any good examples of this.
The UserInfoCntrl controller sends the UserModel object into the contactInformationService in my example.
Some of this comes down to design choices, i.e. use what you think is best. However, a common pattern [citation needed ;)] would be to integrate the synchronization logic between client and server in the "model" service.
The UserModel service would then be responsible for providing the User object to the rest of the application, keeping it in sync with the server (perhaps via methods like save(), or perhaps automatically?). The service would then be the only module responsible for communicating with the server, at least when it comes to user objects. It can also automatically pull the user data from the server when instantiated.
The architecture feels very clean, at least to me.
I don't have any concrete examples that exactly suit your needs, but this authentication service by Fnakstad springs to mind. It maintains an object (actually a user object!) using $http and $cookieStore. Restangular is a bit more high-level than $http, but the concept of a self-contained service that provides methods for manipulating and storing the object still stands.
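For illustration, here is a minimal sketch of that pattern, assuming AngularJS with Restangular; the module, route, field names, and the customPUT/plain() round-trip are just one plausible way to wire it up, not taken from your plunker:

```ts
// Assumes AngularJS + Restangular are loaded globally (as in the plunker setup).
declare const angular: any;

angular.module("app").factory("UserModel", ["Restangular", function (Restangular: any) {
  // Client-side model class: custom attributes and behaviour live here,
  // independent of whatever shape the server sends back.
  class User {
    id?: number;
    firstName = "";
    lastName = "";
    isDirty = false; // client-only attribute the server knows nothing about

    constructor(data: Partial<User>) {
      angular.extend(this, data);
    }

    fullName(): string {
      return `${this.firstName} ${this.lastName}`;
    }
  }

  // The service is the only module that talks to the server about users.
  return {
    // Fetch from the server, then re-wrap the plain payload in our model class.
    get(id: number) {
      return Restangular.one("users", id).get()
        .then((data: any) => new User(data.plain ? data.plain() : data));
    },

    // Send only the fields the server knows about, then re-wrap the response.
    save(user: User) {
      const payload = { firstName: user.firstName, lastName: user.lastName };
      return Restangular.one("users", user.id).customPUT(payload)
        .then((data: any) => new User(data.plain ? data.plain() : data));
    },
  };
}]);
```

Your controllers would then inject UserModel rather than Restangular directly, so they always deal with instances that keep the custom attributes and methods.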