We're designing a system for a client that allows authenticated users to upload images. We've created an API to upload the files, but the client only wants to keep the latest file and delete all previous ones, so that there is only ever one.
We've looked through the docs and can't find a way for ADAM to handle this in both 2sxc and DNN's file system.
Internally, when deleting images, we see API calls like the following to the internal 2sxc API, but we're wondering whether this is exposed somewhere within the public API:
https://somedomain.com/api/2sxc/app/auto/data/61393528-b401-411f-a001-f423ea46700a/b7d04e2c-c565-496c-8efb-aa133cf90d33/Photo/delete?subfolder=&isFolder=false&id=189&usePortalRoot=false&appId=3
We could probably use the same endpoint above, but we'd likely run into permission issues or changes to the APIs that could be problematic.
Thank you for any advice you can give! Perhaps #iJungleBoy can provide some thoughts on this.
As a solution from a completely different direction, if you are on a later release of 2sxc (v12.8+, v13+) and comfortable programming in C#, you might consider doing this as a "cleanup" from a Dnn Scheduled Task. This can be done with a relatively easy setup. We have a Gist in place that we use as a starter: you simply put the code in the /App_Code folder and then set up a normal Dnn Scheduled Task. NOTE that you can scroll down to the first comment on the Gist to see a screenshot of a complete working setup.
Accuraty's AccuTasks template on GitHub Gists
There are two more key things to note:
You need to install Dnn's CodeDom 3.6 because the example uses the later C# versions' string interpolation - OR remove the few $"ASL2021 - {this.GetType().Name}, Task Scheduled Email" bits, or convert them to string.Format() or something similar.
Since your task's code is NOT running in a (2sxc) module, if needed, you'll do stuff like this: 2sxc Docs - Use 2sxc Instance or App Data from External C# Code
So, if you are comfortable writing code that "finds and deletes stuff older than NN days" - this might be the way to go.
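For illustration, here's a minimal sketch (not the Gist itself, and untested) of the kind of cleanup routine such a scheduled task could call. It assumes the uploads end up in one folder per item somewhere under a known root, and it keeps only the newest file in each folder; in a real DNN install you would likely want to delete through DNN's file APIs rather than raw System.IO so the file table stays in sync.

using System.IO;
using System.Linq;

// Hypothetical helper a Dnn Scheduled Task could call.
// The folder root and the "keep only the newest file" rule are assumptions.
public class UploadCleanup
{
    public static void KeepLatestOnly(string uploadRoot)
    {
        foreach (var dir in Directory.EnumerateDirectories(uploadRoot, "*", SearchOption.AllDirectories))
        {
            var oldFiles = new DirectoryInfo(dir)
                .GetFiles()
                .OrderByDescending(f => f.LastWriteTimeUtc)
                .Skip(1); // everything except the most recent upload

            foreach (var file in oldFiles)
                file.Delete();
        }
    }
}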
I tried from the gmail.com UI, but I didn't find any way to create nested folders under system labels.
Then I tried using the APIs, and it isn't possible from there either.
But I am not able to find any documentation where this behaviour is specified. Am I missing something or doing something wrong?
I am using the Gmail API to create labels: https://developers.google.com/gmail/api/v1/reference/users/labels/create
The problem is that we have a customer who has nested labels under Inbox. I think it may be an old Gmail feature that no longer exists. Can someone clear up my understanding? Thanks in advance.
It appears that USER labels cannot currently be nested under SYSTEM labels with the Gmail API. I'm not sure if they could have been at some point in the past using the Gmail API or another deprecated API. Finding information to indicate it was possible in the past may be difficult, as the older "official" APIs such as the Email Migration v2 API (and its predecessor) appear to have had their documentation taken down.
Maybe #Rubén is onto something here, though I would assume that Google's servers in their current configuration would return some sort of error when attempting this from any client. The functionality that made this possible was probably deprecated and has since been removed. Maybe it was possible with the old, obsolete Email Migration v2 API; I unfortunately cannot recall/prove whether it allowed said functionality.
I cannot find a link with information directly from Google. However, I have found the following:
ScottG_TC said: "You are creating a new Label, did you check off Nest Under and then the drop down will show you all of the user created lables. Those are the only ones you can nest under."
https://productforums.google.com/forum/#!topic/gmail/DqWSicdPTSs
ScottG_TC said: "You can not nest labels in Inbox."
https://productforums.google.com/forum/#!topic/inbox/78TdouDE0s4
Gmail API
Having used the .NET wrapper/version of the API I have noticed that I cannot create a nested label under the 'Inbox' SYSTEM label programmatically. Attempting to do so produces the same result as using the GUI. It creates a flat USER defined label named 'Inbox/Foo'. This appears to be the standard result of attempting to create a nested USER label under a SYSTEM label. That is, a flat USER defined label will be created independent of the intended SYSTEM label.
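As a rough illustration (a sketch assuming an already-authorized GmailService from the Google.Apis.Gmail.v1 client library; the names and values are only examples), the attempt looks something like this, and the created label comes back as a flat USER label:

using System;
using Google.Apis.Gmail.v1;
using Google.Apis.Gmail.v1.Data;

public static class LabelDemo
{
    // 'service' must be an already-authorized GmailService (OAuth setup omitted).
    public static void TryCreateNestedUnderInbox(GmailService service)
    {
        var request = new Label
        {
            Name = "INBOX/Foo", // intended as a child of the Inbox SYSTEM label
            LabelListVisibility = "labelShow",
            MessageListVisibility = "show"
        };

        Label created = service.Users.Labels.Create(request, "me").Execute();

        // created.Type comes back as "user" and created.Name is the literal
        // string "INBOX/Foo" - a flat USER label, not one nested under Inbox.
        Console.WriteLine(created.Id + ": " + created.Name + " (" + created.Type + ")");
    }
}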
Gmail
Example using the Gmail UI itself, and the result after creation (screenshots omitted).
As I'm sure you've already noticed in the UI, the "Nest label under:" option gives no way to specify any of the SYSTEM labels.
Creation of a USER defined label using the name of a SYSTEM label is also invalid.
"Duplicate Question"
Regarding this
I don't believe this is a duplicate question; the Gmail API has the concept of USER and SYSTEM labels. The provided duplicate question/answer only covers creating a nested label under a USER defined label. It does not state whether or not a nested label can be created under a SYSTEM label such as 'Inbox'.
Difference between USER and SYSTEM labels:
https://developers.google.com/gmail/api/guides/labels
The difference can also be observed by going to https://mail.google.com/mail/u/0/#settings/labels - SYSTEM labels have their own section.
I was able to accomplish this by linking my Gmail account to the Windows 10 Mail program. I then used the Windows Mail UI to create my nested folders under my inbox, and when I went back to Gmail in Chrome ::boom:: they were nested under my inbox.
I need to create an automated method for checking certain security settings within one or more Salesforce orgs. The four big ones are:
IP Restrictions within each profile
Mobile User setting disabled
Mobile Lite disabled
Chatter Disabled
I think the first two can be accomplished through the API (SOQL to get all profiles and check that loginIpRanges[] has a length > 0, and SOQL to get all users and check the isMobileUser property for each one), but I can't find anything in the API for the other two and wonder if I would have to screen scrape them.
Any suggestions on the best approach to accomplish this? A local Python or other script that connects remotely via the API and a screen scraper or Selenium script for the non-API items? An Apex or VisualForce page that is installed within each org?
I am new to Salesforce and Apex, so before I start down one road and doing it within Salesforce vs via the API I would really appreciate any guidance.
Thank you!
I think you'll have to take a mixed approach to solving this, perhaps wrapped up in some larger python script.
Use the metadata API to get all of the Profile objects and parse for loginIPRanges. You can use Apache ANT and the Force.com migration tool commands to do this. You can also get the SecuritySettings from the same API and method and get a lot of the things in the Security Health Check, if you need them. The results will be returned in XML, which you can easily parse in your python script.
Use the API and a SOQL query to check for the isMobileUser permission, use python to parse/output results. Beatbox is a good library for connecting to the standard API.
For the last two, I think you'll need to go with some screen scraping/browser automation and parsing. Hopefully someone has a better answer for this, as I'm not familiar enough to help with how to accomplish this aspect. The screens are in standard locations so it should be repeatable as long as future updates don't move things.
Ideally you'll be able to combine these into one large script that fires off Beatbox, then the ant/migration tool, and then a browser automation script.
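For the metadata retrieve in the first step, a package.xml along these lines should work (a sketch - the member names and API version are worth double-checking against your org); it pulls every Profile, so loginIpRanges can be parsed from each retrieved .profile file, plus the org-wide SecuritySettings:

<?xml version="1.0" encoding="UTF-8"?>
<!-- package.xml for the Force.com Migration Tool's retrieve task (sketch) -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <!-- all profiles; each retrieved .profile contains its loginIpRanges -->
        <members>*</members>
        <name>Profile</name>
    </types>
    <types>
        <!-- the org's Security.settings file (SecuritySettings) -->
        <members>Security</members>
        <name>Settings</name>
    </types>
    <version>30.0</version>
</Package>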
I would like to extend my WinForms app with a feature that allows me to monitor which functions are used by the users.
The idea is to count how many times e.g. a button has been clicked, or a popup was opened.
I want to know which features are used more or less often by the users.
Any ideas how this can be done? (Or even if somebody solved this problem already)
tia,
Martin
The only mechanism I can think of to do what you're looking for is to use a logger like log4net / Log4PostSharp to log details to a log file on the machine; this would give you details on usage for that particular client. You would have to create a custom attribute that you could decorate your methods with so that something gets written out to the log file - otherwise your code would end up littered with logging calls!
Have a look at this article too; it uses Log4PostSharp with AOP (Aspect Oriented Programming), which makes the implementation of the logging much cleaner (it uses attributes).
http://www.codeproject.com/KB/dotnet/log4postsharp-intro.aspx
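If you go the plain log4net route (without the AOP part), a minimal sketch might look like this - the logger name, message format, and handler names are only examples, and log4net still needs its usual configuration:

using log4net;

// Minimal usage-tracking helper: one log line per feature use, so a later
// script can count occurrences per feature.
public static class FeatureUsage
{
    private static readonly ILog Log = LogManager.GetLogger("FeatureUsage");

    public static void Track(string featureName)
    {
        Log.InfoFormat("feature-used: {0}", featureName);
    }
}

// In a form's event handler:
// private void saveButton_Click(object sender, EventArgs e)
// {
//     FeatureUsage.Track("SaveButton");
//     // ... existing click logic ...
// }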
You can find some if you google for the term "application analytics" instead of "feature tracking".
I have found the following products:
includeapp.com
Software Statistics Service
Dotfuscator for .NET, DashO for Java
FusionAnalytics
Flurry Analytics
OpenSpan Desktop Analytics
DeskMetrics
EQATEC Analytics
Rapidengines
I might add that I also plan to create such a product. When it reaches Beta I will add it to the list.
I hear about people writing these programs all the time and I know what they do, but how do they actually do it? I'm looking for general concepts.
Technically, screen scraping is any program that grabs the display data of another program and ingests it for its own use.
Quite often, screen scraping refers to a web client that parses the HTML pages of a targeted website to extract formatted data. This is done when a website does not offer an RSS feed or a REST API for accessing the data in a programmatic way.
One example of a library used for this purpose is Hpricot for Ruby, which is one of the better-architected HTML parsers used for screen scraping.
Lots of accurate answers here.
What nobody's said is don't do it!
Screen scraping is what you do when nobody's provided you with a reasonable machine-readable interface. It's hard to write, and brittle.
As an example, consider an RSS aggregator, then consider code that gets the same information by working through a normal human-oriented blog interface. Which one breaks when the blogger decides to change their layout?
Of course, sometimes you have no choice :(
In general, a screen scraper is a program that captures output from a server program by mimicking the actions of a person sitting in front of the workstation using a browser or terminal access program. At certain key points the program interprets the output and then takes an action or extracts certain amounts of information from it.
Originally this was done with character/terminal output from mainframes, for extracting data or updating systems that were archaic or not directly accessible to the end user. In modern terms it usually means parsing the output of an HTTP request to extract data or take some other action. With the advent of web services this sort of thing should have died away, but not all apps provide a nice API to interact with.
A screen scraper downloads the HTML page and pulls out the data of interest, either by searching for known tokens or by parsing it as XML or some such.
In the early days of PCs, screen scrapers would emulate a terminal (e.g. IBM 3270) and pretend to be a user in order to interactively extract or update information on the mainframe. In more recent times, the concept is applied to any application that provides an interface via web pages.
With the emergence of SOA, screen scraping is a convenient way to service-enable applications that aren't. In those cases, scraping the web page is the more common approach taken.
Here's a tiny bit of screen scraping implemented in Javascript, using jQuery (not a common choice, mind you, since scraping is usually a client-server activity):
//Show My SO Reputation Score
var repval = $('span.reputation-score:first');
alert('StackOverflow User "' + repval.prev().attr('href').split('/').pop() +
      '" has (' + repval.html() + ') Reputation Points.');
If you run Firebug, copy the above code and paste it into the Console and see it in action right here on this Question page.
If SO changes the DOM structure / element class names / URI path conventions, all bets are off and it may not work any longer - that's the usual risk in screen scraping endeavors where there is no contract/understanding between parties (the scraper and the scrapee [yes I just invented a word]).
You have an HTML page that contains some data you want. What you do is you write a program that will fetch that web page and attempt to extract that data. This can be done with XML parsers, but for simple applications I prefer to use regular expressions to match a specific spot in the HTML and extract the necessary data. Sometimes it can be tricky to create a good regular expression, though, because the surrounding HTML appears multiple times in the document. You always want to match a unique item as close as you can to the data you need.
Screen scraping is what you do when nobody's provided you with a reasonable machine-readable interface. It's hard to write, and brittle.
Not quite true. I don't think I'm exaggerating when I say that most developers do not have enough experience to write decent APIs. I've worked with screen scraping companies, and often the APIs are so problematic (ranging from cryptic errors to bad results) and so often lack functionality that the website provides that it can be better to screen scrape (web scrape, if you will). The extranet/website portals are used by more customers/brokers than the API clients and are thus better supported. In big companies, changes to extranet portals etc. are infrequent, usually because the portal was originally outsourced and is now just maintained. I refer more to screen scraping where the output is tailored, e.g. a flight on a particular route and time, an insurance quote, a shipping quote, etc.
In terms of doing it, it can be as simple as using a web client to pull the page contents into a string and then running a series of regular expressions over it to extract the information you want.
// Requires System.Net and System.Text.RegularExpressions.
string pageContents = new WebClient().DownloadString("https://stackoverflow.com");
// Illustrative pattern only - assumes some marker appears once per post.
int numberOfPosts = Regex.Matches(pageContents, "question-summary").Count;
Obviously in a large scale environment you'd be writing more robust code than the above.
A screen scraper downloads the HTML page and pulls out the data of interest, either by searching for known tokens or by parsing it as XML or some such.
That is a cleaner approach than regex... in theory. However, in practice it's not quite as easy: most documents need to be normalized to XHTML before you can XPath through them, and in the end we found that fine-tuned regular expressions were more practical.
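For comparison, a minimal sketch of the XPath style (using the HtmlAgilityPack library as an assumed choice; the URL and XPath expression are only examples) could look like this - the library tolerates non-XHTML markup, which sidesteps some of the normalization pain:

using System;
using HtmlAgilityPack;

public static class XPathScrapeDemo
{
    public static void Run()
    {
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("https://stackoverflow.com");

        // Example XPath only: grab every link on the page.
        var links = doc.DocumentNode.SelectNodes("//a[@href]");
        if (links == null) return;

        foreach (HtmlNode link in links)
            Console.WriteLine(link.GetAttributeValue("href", "") + " -> " + link.InnerText.Trim());
    }
}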