Browser Extension: Lowest possible permission to get the current tab's URL - firefox-addon-webextensions

A browser extension I had previously published on the Google web store was taken down due to its permissions. The stated reason was to reduce unneeded permissions, so I removed all unneeded permissions; now only activeTab is used in the manifest.json. The updated extension still wasn't accepted, with the following message:
Your Product violates the “Use of Permissions” section of the policy, which requires that you:
Request access to the narrowest permissions necessary to implement your Product’s features or services. If more than one permission could be used to implement a feature, you must request those with the least access to data or functionality.
[...]
To reinstate your Product, please ensure that your Product requests and uses only those permissions that are necessary to deliver the currently stated product’s features.
The questions I'm hoping to get an answer to:
Is there a simpler way to get the currently active tab's URL?
Is there a hierarchy of permissions somewhere to check which ones are considered higher value?
Thanks for your help
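For context, reading the active tab's URL with nothing more than activeTab usually looks something like the sketch below (illustrative only, assuming a Manifest V2 browser action; this is not the extension's actual code):

// With only "activeTab" in the manifest, the active tab's URL becomes readable
// once the user invokes the extension, e.g. by clicking its toolbar button.
browser.browserAction.onClicked.addListener(async () => {
  const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
  console.log(tab.url); // populated thanks to activeTab, no "tabs" permission needed
});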

Related

Enhanced Domains - What to Check in your Org?

As you know, Salesforce is enforcing Enhanced Domains. I found from Salesforce Help that:
Custom components in your org must be evaluated to check whether they use domain names/static URLs
Some embedded content stored in Salesforce might no longer appear
Third-party applications can lose access to your data
Single sign-on integrations can fail
However, I'm struggling to find out which particular Salesforce elements/configurations should be checked to detect potential gaps. Do you know exactly which areas can be affected and should be evaluated (like Apex code, email templates and so on)? Is there any guide on that?
Your biggest concern should be inbound integrations: things that log in over the REST/SOAP API, get a response with a session id back, ignore the "URL to use for all subsequent requests", and just use a hardcoded URL, whether it's prod or sandbox.
Look at this guy, he's a victim of either the enhanced domain change or the "disable API versions < 30" change: The requested resource no longer exists with rest PHP. Look at these guys, they had a hardcoded URL: how to solve python code error (TooManyRedirects: Exceeded 30 redirects), Salesforce API via postman error INVALID_SESSION_ID.
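As a rough sketch of what "doing it right" looks like for an inbound integration (endpoint and field names follow the standard Salesforce OAuth 2.0 username-password token flow; the function name and query are placeholders):

// Log in against the generic endpoint, then use the instance_url returned in
// the token response for every later call, so a domain rename is picked up.
async function queryAccounts(clientId: string, clientSecret: string,
                             username: string, password: string) {
  const tokenRes = await fetch("https://login.salesforce.com/services/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "password",
      client_id: clientId,
      client_secret: clientSecret,
      username,
      password,
    }),
  });
  const { access_token, instance_url } = await tokenRes.json();

  // Subsequent requests go to instance_url, never to a hardcoded my-domain URL.
  const soql = encodeURIComponent("SELECT Id FROM Account LIMIT 5");
  const res = await fetch(`${instance_url}/services/data/v56.0/query?q=${soql}`, {
    headers: { Authorization: `Bearer ${access_token}` },
  });
  return res.json();
}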
As for stuff inside Salesforce itself - best would be to download the whole project with sfdx and run a text search for your domain name (and site/community name if you have these). Email templates that use merge fields for forgot-password links etc. should be fine, merge fields with record links should be fine... But if you manually craft an email body in Apex - that might be a problem. A lot depends on how creative the developer was. If you find URL.getSalesforceBaseUrl().toExternalForm() it should still work. If the URL is hardcoded / read from a custom setting / custom label / custom metadata it might be more fun.
If you have external apps that display pieces of Salesforce (embedded live chat? some iframe with an FAQ? CMS Connect) - the domain change might mean they need to be updated, both in terms of updating URLs and changing security rules (CSP, for example).

Azure AD provisioning requires two runs to succeed with custom app

We've created an application using SCIM 2 SDK from PingIdentity for provisioning with Azure AD. Custom mapping is set up and working.
However, when the user is CREATED, all of the fields are included in the import, but only a few fields are included in the provisioning step and sent to our application. Provisioning needs to run a second time on that user to UPDATE in order for all the fields to be included. Amongst other things, this means that first and last name are not split and it only sends the displayname (which ends up as firstname on our end).
For some users in normal provisioning, it can take days between the create and update runs so we're missing data for a long time.
Does anyone know how we can test for what's causing this and solve it so that all the fields are included in the initial CREATE run for a user?
Here are the attribute mapping settings: https://imgur.com/ypfAAmD
And an example log of when the user is created with only basic fields: https://imgur.com/iOXACJh
vs. when the user is updated with all the other fields: https://imgur.com/UqDNyCv
I'm a product manager at Microsoft who works on the provisioning service and our SCIM client.
The behavior you're seeing occurs when attributes that are not part of the SCIM core schema are included under "short" names. Attributes not defined in the SCIM core schema (RFC 7643) should use full URN syntax; something to the effect of urn:ietf:params:scim:schemas:extension:appName:2.0:User:attributeName is commonly used by other implementations. The shaky behavior you're seeing, where the AAD provisioning service fails to send these attribute values via a POST but later includes them in a PATCH, comes down to different code paths in the provisioning service: the PATCH code happens to handle this differently than the POST code. This is purely by chance, however, and isn't an intentional design choice. At some future point I'm hoping we'll make this more consistent and disallow incorrectly structured attribute names entirely.
If you adjust your attribute names to align with the guidance in the SCIM spec's schema RFC and provide the attributes with fully defined URNs, you should see consistent behavior that works on both POST and PATCH.
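For illustration, a SCIM user payload that follows that guidance might look something like this (appName and the attribute names are placeholders, following the extension pattern described in RFC 7643):

{
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:extension:appName:2.0:User"
  ],
  "userName": "jdoe@example.com",
  "name": { "givenName": "Jane", "familyName": "Doe" },
  "urn:ietf:params:scim:schemas:extension:appName:2.0:User": {
    "division": "Sales",
    "costCenter": "4130"
  }
}

On the Azure AD side, the target attribute names in your mapping would then use the fully qualified form as well, e.g. urn:ietf:params:scim:schemas:extension:appName:2.0:User:costCenter.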

Is it possible for a webextension/addon to request permission for a specific website at runtime?

I am trying to develop a WebExtension that accesses a user-defined API whose URL I do not know in advance. (More specifically, it manages their Ghost publications, whose APIs are hosted on the same domains as the publications themselves.) Hence, I need to let users enter their API URL and access it throughout the add-on.
The simplest way to do this would be to request the <all_urls> host permission (basically *://*/*). But instead of such a sweeping permission, I was wondering if there's a more fine-grained way of requesting permission just for the specific URL I need?
I know that the optional_permissions setting lets an addon request additional permissions at runtime if those permissions are specified in advance in the manifest. From this w3cub page:
Type: Array
Mandatory: No
Example: "optional_permissions": ["*://developer.mozilla.org/*", "webRequest"]
Use the optional_permissions key to list permissions that you want to ask for at runtime, after your extension has been installed.
However, this seems to require a hard-coded permission (*://developer.mozilla.org/*) that's requested at runtime. What I need is a user-defined permission that can be requested in the same manner. Is there any way I can go about implementing this?
From the list of use-cases for optional permissions on the Mozilla docs:
The extension may need host permissions, but not know at install time which host permissions it needs. For example, the list of hosts may be a user setting. In this scenario, asking for a more specific range of hosts at runtime, can be an alternative to asking for "<all_urls>" at install time.
The way to do it is to include <all_urls> in the optional_permissions setting of your manifest.json like this:
"optional_permissions": [
"<all_urls>",
"webRequest",
"geoLocation"
],
In this example, we're also requesting the webRequest and geolocation permissions; this is to demonstrate the two different kinds of permission you can request.
Then, in your code, instead of requesting <all_urls>, just request the URLs that you actually want to access:
browser.permissions.request({
  permissions: ["webRequest", "geolocation"],
  origins: ["https://example.com/*"]
});
As you can see, the request is neatly separated into an option for more API permissions and another for the different websites (origins) you want to access.
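To cover the user-defined case from the question, you can build the origin pattern from whatever URL the user enters and pass it to the same call. A rough sketch (the function name and pattern building are illustrative only):

// Turn a user-entered API URL into an origin pattern and request permission
// for just that origin at runtime. permissions.request must be called from
// a user gesture, e.g. a click handler in the options page.
async function requestAccessTo(userEnteredUrl: string): Promise<boolean> {
  const { protocol, hostname } = new URL(userEnteredUrl);
  const originPattern = `${protocol}//${hostname}/*`; // e.g. "https://myblog.example/*"
  return browser.permissions.request({ origins: [originPattern] });
}

Note that origins requested at runtime must fall within what's declared in optional_permissions, which is why <all_urls> goes in the manifest while the actual grant stays narrow.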
A more complete example can be found in the permissions.request docs. (You can also find a working sample webextension here, but it doesn't include host permissions, only API ones.)

Best practices for similar RBAC schemas?

Hi all, I'm writing a boilerplate for future projects. The composition is as follows:
Server:
Express
Prisma 2
TypeScript
JWT auth (access token in memory, refresh token in cookie)
MySQL
I'm writing an RBAC schema, and have successfully written Express middlewares to determine whether a user is logged in and whether a user has a specific permission on their role.
If you've ever used any of the minecraft server permission plugins, I'm trying to emulate the common pattern used there.
Users have role(s)
Roles have permissions
Roles can inherit permissions from one or more roles
Roles have a "nextRole" field to determine what role to give when the "promote" event is triggered.
Everything works fine on the server side.
What I'm wondering is: how should I go about mirroring the middlewares (login, permissions) on the client side, and how should I determine whether a user has permission to do something?
What I've looked at:
Creating a "hasPermission" endpoint wouldn't be very good as I'd need to make an API call every time a permission check is needed.
Eager loading all roles and permissions from the API when logging in and returning them in the response (as far as I know, I can't eager load the recursive role inheritance/nextRole)
Returning ONLY the user, without roles and permissions, from the JWT/login endpoint and getting roles/permissions from their own endpoints (again, this needs to be recursive to collect all inherited roles and their permissions)
Has anyone created an RBAC schema like this, and how did you go about checking permissions on the client side without being too redundant/using too much memory/too many api calls?
This is a good question, here's my answer to it.
An app is normally protected by auth info, which means access can be blocked if a user is not permitted. In a server application this is easy, because the session can be used to look up the current user's info, including roles.
However, in a client app it's a bit trickier. Say we want to protect a route (a page or a section of a page) once the user logs in.
if (!user.authenticated) return null
We can use the above line to block unauthenticated users, or use other information from the user object for stricter checks.
if (user.role !== 'Admin') return null
We could wrap these checks into a component, such as
<Allow role="admin" render={...} />
I believe you get the point. However, there's something unique about the client approach: only the user info is returned, not the role or permission definitions.
So, to follow your plan, do we need to share the permission or role definitions with the client side? That's the million dollar question.
In practice, the UI rarely needs the complete info, because the UI normally reshapes permissions a bit for its own purposes. That doesn't mean you can't share the complete info from the backend; doing so may make the UI's job easier or more complicated, depending on the app.
The reason is what I explained above: the UI is writing an if statement (possibly hidden) anyway. Whether that if is true or false, most of the front-end code is already loaded. That's very different from the backend version, which can entirely block delivery of the content.
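One way to put this together, assuming the server flattens role inheritance at login and returns the user with a plain array of permission strings (all names below are placeholders, not part of the original boilerplate):

// Server side: resolve a role's own and inherited permissions once,
// so the client only ever receives a flat string array.
type Role = { id: number; permissions: string[]; inherits: Role[] };

function collectPermissions(role: Role, seen = new Set<number>()): string[] {
  if (seen.has(role.id)) return []; // guard against inheritance cycles
  seen.add(role.id);
  const inherited = role.inherits.flatMap(parent => collectPermissions(parent, seen));
  return [...new Set([...role.permissions, ...inherited])];
}

// Client side: a tiny helper plus a gate built on the flat list,
// in the spirit of the <Allow /> component above.
type ClientUser = { authenticated: boolean; permissions: string[] };

function hasPermission(user: ClientUser, permission: string): boolean {
  return user.authenticated && user.permissions.includes(permission);
}

// Usage in a component:
// if (!hasPermission(currentUser, "post.delete")) return null;
// return <DeleteButton />;

The client-side check is only for UX; the server middlewares remain the real enforcement.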

ADFS not issuing claims from custom attribute stores to some users

I'm trying to work out why some of our users aren't issued claims by our custom attribute stores.
Our main attribute store for authentication is Active Directory, but we are using two custom attribute stores to issue several custom claims to users, and also to perform some logging of the claims issued. When an affected user logs in, they are authenticated successfully by AD, but no further claims are added. According to the logging in our attribute stores, BeginExecuteQuery is never called.
I can't see anything to link the affected users, but they mainly seem to be new users, or users that have not logged into the system in a long time. Restarting ADFS sometimes clears the problem, but whether it does or not seems to be random.
I'm trying to understand why an attribute store would be ignored by ADFS on logon for certain users, when it works fine for others. If there is a quick guaranteed temporary fix to get users' claims issued correctly, that would be useful too!
For security reasons, I don't have access to the ADFS Debug tracing.
This was eventually solved after a long string of calls with Microsoft's AD FS support team. The problem was traced to a piece of our claims rule language that used the lastLogon and lastLogonTimestamp AD attributes without accounting for how they actually behave (lastLogon is not replicated between domain controllers, and lastLogonTimestamp is only updated periodically). This meant that for some users the condition to grant the custom claims was never met.
