I need to create an automated method for checking certain security settings within one or more Salesforce orgs. The four big ones are:
IP Restrictions within each profile
Mobile User setting disabled
Mobile Lite disabled
Chatter disabled
I think the first two can be accomplished through the API (SOQL to get all profiles and check that loginIpRanges[] has length > 0, and SOQL to get all users and check the isMobileUser property for each one), but I can't find anything in the API for the other two, and I wonder if I would have to screen-scrape them.
Any suggestions on the best approach to accomplish this? A local Python or other script that connects remotely via the API, plus a screen scraper or Selenium script for the non-API items? Or an Apex or Visualforce page installed within each org?
I am new to Salesforce and Apex, so before I start down one road (doing it within Salesforce vs. via the API), I would really appreciate any guidance.
Thank you!
I think you'll have to take a mixed approach to solving this, perhaps wrapped up in some larger python script.
Use the Metadata API to get all of the Profile objects and parse for loginIpRanges. You can use Apache Ant and the Force.com Migration Tool commands to do this. You can also get the SecuritySettings from the same API and method, which covers a lot of what's in the Security Health Check, if you need that. The results come back as XML, which you can easily parse in your Python script.
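For illustration, once the retrieve has run, the check can be quite small. A minimal sketch, assuming the default Metadata API file layout and namespace, and that your package.xml pulled the profiles you care about:

```python
# Sketch: flag any retrieved profile that has no loginIpRanges entries.
# Assumes a prior Ant/Migration Tool retrieve wrote profiles/*.profile
# under "retrieved/" and the standard Metadata API XML namespace.
import glob
import os
import xml.etree.ElementTree as ET

NS = "{http://soap.sforce.com/2006/04/metadata}"

for path in glob.glob("retrieved/profiles/*.profile"):
    ranges = ET.parse(path).getroot().findall(NS + "loginIpRanges")
    name = os.path.basename(path)
    print(name + ": " + ("OK" if ranges else "FAIL - no login IP restrictions"))
```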
Use the API and a SOQL query to check for the isMobileUser permission, and use Python to parse/output the results. Beatbox is a good library for connecting to the standard API.
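A minimal sketch of that query with Beatbox (UserPermissionsMobileUser is assumed to be the SOQL field behind the Mobile User checkbox; verify the field name with a describe call for your API version):

```python
# Sketch: list active users that still have the Mobile User permission.
# Beatbox expects the security token appended to the password.
import beatbox

sf = beatbox.PythonClient()
sf.login("admin@example.com", "passwordSECURITYTOKEN")

records = sf.query(
    "SELECT Username, UserPermissionsMobileUser "
    "FROM User WHERE IsActive = true"
)
for rec in records:
    if rec["UserPermissionsMobileUser"]:
        print("FAIL: " + rec["Username"] + " has Mobile User enabled")
```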
For the last two, I think you'll need to go with some screen scraping/browser automation and parsing. Hopefully someone has a better answer for this, as I'm not familiar enough to help with how to accomplish this aspect. The screens are in standard locations so it should be repeatable as long as future updates don't move things.
Ideally you'll be able to combine these into one large script that fires off Beatbox, then the Ant/Migration Tool retrieve, and finally the browser automation script.
This is my first task involving detecting users' geolocations, and I am a fairly new dev.
The app uses React and the backend is Node.js.
Currently we have some functions that call an API which returns users' locations (this takes a while).
But two other options right now are:
Geolocation API <--- this might need users' permission?
Fastly
For Fastly, I am asking:
Does it work with a non-server-side-rendered app?
For the production site, we have Fastly set up in Route 53, but I need to ask DevOps about the staging environment. (I got this info from others but do not know what it means.)
Can someone explain to me how Fastly works and what needs to be set up?
Basically any information is appreciated. I do not even know what to google to find the answers.
Thanks.
If you have Fastly fronting your app, then YES you can definitely use Fastly to provide geolocation information.
Just to be clear (as you mentioned you were unfamiliar with Fastly and more generally are a "new dev"), when I say "fronting your app" I mean: when a client (e.g. a user's web browser) makes a request for https://yourapp.com/, does the request first get routed through Fastly? If it does, then Fastly will proxy the request through to your app and any data you send back through Fastly to the client will likely be cached to make future requests for all your users much quicker (this is one of the many functions Fastly provides).
Fastly has lots of products, but for your primary purposes there are two platform services Fastly offers:
Content Delivery (CDN), which is built on Varnish/VCL (if your ops team already has Fastly set up, then this is likely what they have).
Compute@Edge, which is built upon WebAssembly.
I would highly recommend reading the following resources to understand more about the Fastly platform options:
Content Delivery with VCL
Content Delivery with Compute@Edge
As far as using Fastly to handle geolocation information, I'll point you to the following resources:
https://developer.fastly.com/solutions/examples/geo-ip-api-at-the-edge
https://developer.fastly.com/solutions/examples/decorating-origin-requests-with-geoip
Also search the following page for references to "geolocation" as there are quite a few 'examples' that you might be interested in:
https://developer.fastly.com/solutions/examples/
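To make the second example above ("decorating origin requests") concrete: the edge configuration looks up the geolocation data and forwards it to your origin as ordinary request headers, so your backend just reads headers. A minimal sketch of the origin side, assuming hypothetical X-Geo-* header names (whatever names your VCL/Compute code sets are what you'd read); shown in Python for brevity, but it's the same one-line header read in your Node.js app:

```python
# Sketch of an origin that echoes geolocation headers added at the edge.
# Header names are placeholders chosen by your edge configuration.
from http.server import BaseHTTPRequestHandler, HTTPServer

class GeoEchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        country = self.headers.get("X-Geo-Country", "unknown")
        city = self.headers.get("X-Geo-City", "unknown")
        body = ("country=" + country + " city=" + city + "\n").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), GeoEchoHandler).serve_forever()
```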
I would also suggest having a play around with https://fiddle.fastly.dev, which lets you use either VCL or any of the supported Compute@Edge languages to test out ideas without needing a real Fastly service set up. This will give you a chance to trial some geolocation code.
Lastly, you can also have a read through the first half of https://www.integralist.co.uk/posts/fastly-varnish/ which covers some basics about Fastly's use of Varnish/VCL (but I'd suggest reading the official references, linked above, first).
Any other questions, then please feel free to reach out to support@fastly.com who will be happy to help.
We're designing a system for a client that allows authenticated users to upload images. We've created an API to upload the files, but the client only wants to keep the latest file and delete all previous ones, so that there would only ever be one.
We've looked through the docs and can't find a way for ADAM to handle this in either 2sxc or DNN's file system.
Internally, when deleting images, we see API calls like the following to the internal 2sxc API, but we're wondering if this is exposed somewhere within the public API.
https://somedomain.com/api/2sxc/app/auto/data/61393528-b401-411f-a001-f423ea46700a/b7d04e2c-c565-496c-8efb-aa133cf90d33/Photo/delete?subfolder=&isFolder=false&id=189&usePortalRoot=false&appId=3
We could probably use the same endpoint above, but we'd likely run into permission issues or changes to the APIs that could be problematic.
Thank you for any advice you can give! Perhaps @iJungleBoy can provide some thoughts on this.
As a solution from a completely different direction: if you are on a later release of 2sxc (v12.8+, v13+) and comfortable programming in C#, you might consider doing this as a "cleanup" from a Dnn Scheduled Task. This can be done with a relatively easy setup. We have a Gist in place that we use as a starter. You simply put the code in the /App_Code folder, then set up a normal Dnn Scheduled Task. NOTE that you can scroll down to the first comment on the Gist to see a screenshot of a complete working setup.
Accuraty's AccuTasks template on GitHub Gists
There are two more key things to note:
You need to install Dnn's CodeDom 3.6 because the example uses the later C# versions' string interpolation - OR remove the few $"ASL2021 - {this.GetType().Name}, Task Scheduled Email" bits, or convert them to string.Format() or something.
Since your task's code is NOT running in a (2sxc) module, if needed, you'll do stuff like this: 2sxc Docs - Use 2sxc Instance or App Data from External C# Code
So, if you are comfortable writing code that "finds and deletes stuff older than NN days", this might be the way to go.
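The cleanup logic itself is tiny; sketched below in Python just to show the idea (the real scheduled task would be C# in /App_Code per the Gist above, and the "uploads" folder path is a placeholder):

```python
# Sketch: delete files older than MAX_AGE_DAYS from a folder.
import os
import time

MAX_AGE_DAYS = 30
UPLOAD_DIR = "uploads"  # placeholder path
cutoff = time.time() - MAX_AGE_DAYS * 24 * 60 * 60

for name in os.listdir(UPLOAD_DIR):
    path = os.path.join(UPLOAD_DIR, name)
    if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
        os.remove(path)
        print("deleted " + path)
```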
I would like to (programmatically) convert a text file with questions to a Google Form. I want to specify the questions, the question types, and their options. Example: the question type scale should go from 1 to 7 and should have the label 'not important' for 1 and 'very important' for 7.
I was looking into the Google Spreadsheet API but did not see a solution.
(The Google form API at http://code.lancepollard.com/introducing-the-google-form-api is not an answer to this question)
Google released an API for this: https://developers.google.com/apps-script/reference/forms/
This service allows scripts to create, access, and modify Google Forms.
Until Google satisfies this feature request (star the feature on Google's site if you want to vote for it), you could try a non-API approach.
iMacros allows you to record, modify, and play back macros that control your web browser. My experiments with Google Drive showed that the basic version (without DirectScreen technology) doesn't record macros properly. I tried it with both the plugin for IE (basic and advanced click mode) and Chrome (the latter has limited iMacros support). FYI, I was able to get the iMacros IE plug-in to create questions on mentimeter.com, but the macro recorder gets some input fields wrong (which requires hacking the macro, e.g., double-checking the ATTR= of the TAG commands with Chrome's 'Inspect element' feature).
Assuming you can get the TAG commands to produce clicks in the right places in Google Drive, the approach is that you basically write (ideally record) a macro, going through the steps you need to create the form as you would in a browser. Then the macro can be edited (you can use variables in iMacros, get the question/question-type data from a CSV or user-input dialogs, etc.). Looping in iMacros is crude, however: there's no EOF for a CSV (you basically have to know how many lines are in the file and hard-code the loop in your macro).
There's a way to integrate iMacros calls with VB, etc., but I'm not sure if that's possible with the free versions. There's another angle where you generate code (JavaScript) from a macro and then modify it from there.
Of course, all of these things are more fragile than an API approach long-term: Google could change its presentation layer, and that will break your macros.
Seems like Apps Script now has a REST API and SDKs for it. Through Apps Script you can generate Google Forms. This API was really hard to find by googling, and I haven't yet tested it myself, but I am going to build something with it today (hopefully). So far everything looks good.
EDIT: Seems like the REST API I am using works very well for fully automated usage.
In March 2022, Google released a REST API for Google Forms. The API allows basic CRUD operations and also adds support for registering watches on a form, to be notified whenever the form is updated or a new response is received.
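For the original scale-question example, a sketch against the Forms REST API using google-api-python-client might look like this (credential setup is omitted; creds is assumed to be an authorized credentials object with the forms.body scope, and the request shapes follow the published v1 reference):

```python
# Sketch: create a form with one 1-7 scale question via the Forms API v1.
from googleapiclient.discovery import build

def create_survey(creds):
    forms = build("forms", "v1", credentials=creds)
    form = forms.forms().create(body={"info": {"title": "Survey"}}).execute()
    forms.forms().batchUpdate(
        formId=form["formId"],
        body={"requests": [{
            "createItem": {
                "item": {
                    "title": "How important is this?",
                    "questionItem": {"question": {"scaleQuestion": {
                        "low": 1,
                        "high": 7,
                        "lowLabel": "not important",
                        "highLabel": "very important",
                    }}},
                },
                "location": {"index": 0},
            }
        }]},
    ).execute()
    return form["formId"]
```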
As of now (March 2016), Google Forms APIs allow us to create forms and store them in Google Drive. However, the Forms APIs do not allow one to programmatically modify the form (such as modifying content, adding or deleting questions, pre-filling data, etc.). In other words, the form is static. In order to serve custom forms, external APIs are needed.
I'm stumped and need some ideas on how to do this or even whether it can be done at all.
I have a client who would like to build a website tailored to English-speaking travelers in a specific country (Thailand, in this case). The different modes of transportation (bus & train) have good websites providing their respective information, and both are very static in terms of the data they present (the schedules rarely change). Here's one of the sites I would need to get info from: train schedules. The client wants to give users the ability to search for a beginning and end location and determine, using the external websites' information, how they can best get there, presenting a route with schedule times for the chosen modes of transport.
Now, in my limited experience, I would think the way to do that would be to retrieve the original schedule info from the external sites' servers (via an API or some other means) and retain the info in a database, which can be queried as needed. Our first thought was to contact the respective authorities to determine how/if this can be done, but this has proven problematic, mainly due to the language barrier.
My client suggested what is basically "screen scraping", but that sounds like it would be complicated at best: downloading the web page(s) and filtering through the HTML for the relevant/necessary data to put into the database. My worry is that the info on these sites is so static that the data isn't even kept in a database to build the page, and the web page itself is updated by hand (hard-coded) when something changes.
I could really use some help and suggestions here. Thanks!
Screen scraping is always problematic IMO as you are at the mercy of the person who wrote the page. If the content is static, then I think it would be easier to copy the data manually to your database. If you wanted to keep up to date with changes, you could then snapshot the page when you transcribe the info and run a job to periodically check whether the page has changed from the snapshot. When it does, it sends an email for you to update it.
The above method could also be used in conjunction with some sort of screen scraper, which could fall back to a manual process if the page changes too drastically.
Ultimately, it is a case of how much effort (cost) your client is willing to bear for accuracy.
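The snapshot check itself can be very small. A minimal sketch (the URL and snapshot filename are placeholders, and the email step is left as a print):

```python
# Sketch: hash the page, compare with the stored snapshot, alert on change.
import hashlib
import urllib.request

URL = "http://example.com/schedule"  # placeholder
SNAPSHOT = "schedule.sha256"

with urllib.request.urlopen(URL) as resp:
    current = hashlib.sha256(resp.read()).hexdigest()

try:
    with open(SNAPSHOT) as f:
        previous = f.read().strip()
except FileNotFoundError:
    previous = None

if previous is not None and current != previous:
    print("Page changed; review and update the database")  # send email here

with open(SNAPSHOT, "w") as f:
    f.write(current)
```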
I have done this for the following site: http://www.buscatchers.com/, so it's definitely more than doable! A key feature of a web scraping solution for travel sites is that it must send you emails if anything goes wrong during the scraping process. On the site, I use a two-day window so that I have two days to fix the code if the design changes. Only once or twice have I had to change my code, and it's very easy to do.
As for examples: there is some simplified source code here: http://www.buscatchers.com/about/guide. The full source code for the project is here: https://github.com/nicodjimenez/bus_catchers. This should give you some ideas on how to get started.
I can tell that the data is dynamic; it's too well structured. It's not hard for someone who is familiar with XPath to scrape this site.
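To give a feel for what that scraping looks like, here's a minimal sketch with lxml and XPath (the URL and the table structure are placeholders; inspect the real schedule page to find the actual elements):

```python
# Sketch: pull rows out of an HTML schedule table with XPath.
import urllib.request
from lxml import html

URL = "http://example.com/train-schedule"  # placeholder

with urllib.request.urlopen(URL) as resp:
    tree = html.fromstring(resp.read())

# Hypothetical structure: one <tr> per departure in a #schedule table.
for row in tree.xpath("//table[@id='schedule']//tr"):
    cells = [cell.text_content().strip() for cell in row.xpath("./td")]
    if cells:
        print(cells)  # e.g. insert into your database instead
```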
I would like to extend my WinForms app with a feature that allows me to monitor which functions are used by the users.
The idea is to count how many times e.g. a button has been clicked, or a popup was opened.
I want to know which features are used more or less often by the users.
Any ideas how this can be done? (Or even if somebody solved this problem already)
tia,
Martin
The only mechanism I can think of to do what you're looking for is to use a logger like log4net / Log4PostSharp to log details to a log file on the machine; this would give you details on usage for that particular client. You would have to create a custom attribute that you could decorate your methods with, which would result in something being written out to the log file; otherwise your code would end up littered with logging calls!
Have a look at this article too; it uses Log4PostSharp with AOP (Aspect Oriented Programming), which makes the implementation of the logging much cleaner (it uses attributes).
http://www.codeproject.com/KB/dotnet/log4postsharp-intro.aspx
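To illustrate the idea behind the attribute approach (not .NET code; this is the same pattern sketched as a Python decorator, since a Log4PostSharp attribute plays the analogous role in C#):

```python
# Sketch: count feature usage by decorating each feature entry point once,
# instead of littering logging calls through the code.
import functools
import logging

logging.basicConfig(filename="usage.log", level=logging.INFO)

def track_usage(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("feature used: %s", func.__name__)
        return func(*args, **kwargs)
    return wrapper

@track_usage
def open_report_popup():
    pass  # the actual feature code

open_report_popup()  # appends "feature used: open_report_popup" to usage.log
```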
You can find some if you google for the term "application analytics" instead of "feature tracking".
I have found the following products:
includeapp.com
Software Statistics Service
Dotfuscator for .NET, DashO for Java
FusionAnalytics
Flurry Analytics
OpenSpan Desktop Analytics
DeskMetrics
EQATEC Analytics
Rapidengines
I might add that I also plan to create such a product. When it is in beta, I will add it to the list.