I make extensive use of the file.saveURL feature of trigger.io, but I would like to know whether it is possible (or could be added to the file module) to run a single command that removes all stored items.
The app I am creating is customizable by the user, and because of that they have the ability to "leave" the app, in a sense.
Doing so should clear out all localStorage data and any downloaded items.
Currently I have a method that stores each file reference in localStorage, and when the user "leaves" I loop over the entries in localStorage and remove each item. While this seems to work OK, it has quirks at times. I'm just curious whether it would be possible to add a simple remove-all type function to the file save?
Seeing as we are responsible for the cleanup of these items, it would be nice to simply remove all stored items.
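For reference, here is roughly what my current cleanup looks like (a sketch only; 'savedFiles' is just the localStorage key I happen to use, and I'm assuming the file module's forge.file.remove call for the per-file delete):

// Remember every saved file so it can be cleaned up on "leave".
// 'savedFiles' is my own localStorage key; forge.file.remove is assumed
// to be the file module's per-file delete call.
function rememberFile(file) {
    var saved = JSON.parse(localStorage.getItem('savedFiles') || '[]');
    saved.push(file);
    localStorage.setItem('savedFiles', JSON.stringify(saved));
}

function removeAllSavedFiles() {
    var saved = JSON.parse(localStorage.getItem('savedFiles') || '[]');
    saved.forEach(function (file) {
        forge.file.remove(file, function () {
            // file deleted
        }, function (e) {
            forge.logging.log('Failed to remove file: ' + JSON.stringify(e));
        });
    });
    localStorage.removeItem('savedFiles');
}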
Currently, no - we don't save those files in a separate folder, which could have been cleared up in one go...
It's a change we could make in the future though if it proves a popular update to the file module!
I am currently trying to improve my Matomo skills.
I created Custom Dimensions and started tracking them like this:
_paq.push(['setCustomDimension', 1, kategorie]);
_paq.push(['trackPageView']);
It worked.
After that I created a Goal and tried to create my own plugin.
Now my Custom Dimensions suddenly aren't tracked anymore - Matomo shows me there are 0 actions in the visits, even though I performed several actions.
I thought I might have broken something while creating my plugin, so I deleted it, but my Custom Dimensions still aren't tracked.
Do you have any idea what my problem might be?
How were you checking that dimension tracking worked before? Do you mean data appearing in the reports, or just the parameter being added to the GET request?
First of all, check your Visitors/Visitor log report - the lack of those numbers in the summaries may be an effect of archiving not having run in the meantime.
I'm not sure how creating a plugin could relate to this; maybe you can tell us more about what you mean here?
What is always a good idea is running debug mode on the request, so you can get confirmation of whether the dimension value is picked up properly. Sometimes it may be an issue of the value's length - AFAIK a custom dimension value is limited to 250 characters.
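For reference, tracker debug mode is enabled by editing config/config.ini.php (a standard Matomo setting - double-check the docs for your version):

[Tracker]
debug = 1

With that on, each tracking request returns a verbose trace of what the tracker parsed, so you can confirm whether your dimension1=... parameter arrives and is accepted.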
I have an Angular SPA app running on Azure and I'd like to implement a rich text editor similar to Medium.com. I'll be using some existing editor for this, but I have a problem with image files.
The problem
I would like my editor to be able to embed images inside the content. The problem I'm having is deciding when I should upload images to the server.
Possible solution 1: Immediate upload after they're selected
The good
saving content is quicker because all images are likely already uploaded
files get displayed right after they're uploaded from the server URL
The bad
files may have to be deleted on the server if the user chooses to cancel editing
files may get stranded on the server if the user simply closes their browser window
Possible solution 2: Upload after save
The good
files get displayed immediately using FileAPI capabilities
no stranded server side files if editing is discarded in whatever way
The bad
saving of content may take longer as all images need to be uploaded at the moment of saving content
additional client-side code is needed to display images from local files
Question
I would like to implement Solution 1 because it gives a more transparent user interface and reacts quicker when saving edits => better UX. But how should I manage stranded files? I could use a worker process that deletes stranded files from time to time, but I wonder whether that's the best approach for this scenario.
What and how would you suggest I implement this?
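For concreteness, Solution 1 as I picture it would look roughly like this (a sketch only; the /api/images endpoint, its response shape, and insertImageIntoEditor are placeholders, not an existing API):

// Upload each image as soon as it is selected (Solution 1).
// '/api/images', the { url: ... } response shape and insertImageIntoEditor
// are hypothetical placeholders.
document.getElementById('imageInput').addEventListener('change', function (e) {
    var file = e.target.files[0];
    if (!file) return;
    var form = new FormData();
    form.append('image', file);
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/images');
    xhr.onload = function () {
        var data = JSON.parse(xhr.responseText);
        insertImageIntoEditor(data.url); // show the image from its server URL
    };
    xhr.send(form);
});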
This is highly subjective (opinion based), but I will give it a shot.
You actually have a bigger problem than you think. In your two approaches you only describe the situation where the user starts something new, whereas I see much harder issues when the user edits an existing item. What happens if they delete images, add new ones, and then hit CANCEL? And what if the Internet connection drops while creating or editing?
I would also go for Solution 1 and, of course, minimize the "bad" things, as they aren't that numerous or hard to handle. Here is how I would solve all the "bad"s in Approach 1 (there is a small sketch of the cleanup rule after this list):
All my articles (or whatever the user is editing with the editor) will have a boolean flag, "IsDraft" or something like it. All my front-end business logic will then only look for items where IsDraft == false.
Whenever a user starts a new article (the easiest problem to solve), I immediately create a new item in my DB with IsDraft = true.
Have a link table to keep the link between the ID of the item being created and the image files (blobs) being used. The point here is that if you do not keep links between used and unused blobs, you will have a lot of headaches deciding which blobs to delete and which to leave in storage.
Have a worker process (either a worker process in a Web Role if I use Cloud Services, or a Web Job (+ nice and short explanation here) if I use Web Sites) that checks for articles that are drafts and older than XXX days. If any are found, delete the files and the article itself.
Handling the editing of an existing item is more challenging. For this, I might take the following approach:
Create a new copy of the entire article when the user hits Edit and mark it as a draft.
If the user hits Save, switch the content of the new article (the new version) with the existing one, leaving the new article marked as IsDraft - the worker process will clean it up.
If the user doesn't hit Save for some reason (hits Cancel, the Internet drops, the computer restarts, the browser crashes, ...), the new article will be cleaned up later by the worker process.
And if you want to go deeper and crazier, you can have a section in your admin panel that shows the drafts to your users, so they can either continue working or leave them to be auto-cleaned.
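Here is the small sketch of the cleanup rule promised above. The articles and imageLinks collections and the deleteBlob callback are in-memory stand-ins for your DB tables and blob-storage client - illustration only, not a specific Azure API:

// Delete draft articles older than maxAgeDays, plus the blobs linked to them.
// "articles", "imageLinks" and "deleteBlob" are stand-ins for your own
// DB tables and blob-storage client; createdAt is a millisecond timestamp.
function cleanUpStaleDrafts(articles, imageLinks, deleteBlob, maxAgeDays) {
    var cutoff = Date.now() - maxAgeDays * 24 * 60 * 60 * 1000;
    return articles.filter(function (article) {
        var stale = article.isDraft && article.createdAt < cutoff;
        if (stale) {
            imageLinks
                .filter(function (link) { return link.articleId === article.id; })
                .forEach(function (link) { deleteBlob(link.blobName); });
        }
        return !stale; // keep everything that is not a stale draft
    });
}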
I noticed that Salesforce doesn't allow you to override controller functions for all objects.
Say you want to do something whenever objects get saved: there is no way to attach that action unless you create a custom page and include either a standard controller or an extension. I run into the same limitation if I want to add the same meta tag on all pages. Is there a better way to do this?
Generally - no. Roughly speaking, if Salesforce doesn't allow you to do something, it's usually a pretty good hint that you're doing it wrong. I realize it sounds like I'm a fanboy, but really - can you expand your question with a concrete example of why you would want to do something like that? For example, governor limits are evil, annoying, etc. - but they force you to write efficient code that doesn't strain the database too much.
if you want to do something whenever objects get saved
That's what triggers are for. Ask yourself whether the "action" you need should happen only from the web UI or also when performed via the API (mass data loads, a smartphone application, etc.).
if you want to add the same meta-tag on all pages
You could maybe pull off a similar result by adding a component to the sidebar. It won't cover all cases (like accessing Reports/Dashboards), but it's hard to say more without knowing what you're really after. Then again, custom VF page overrides won't help you with Reports either.
I wanted to add this as a comment, but was unable to.
Anyway, for the example that you mentioned in the comment: you can add that jQuery plugin in the home page sidebar component and activate the plugin only on the custom objects where you want it to run. You might already know that you can deduce which object a record belongs to by looking at the first 3 characters of the record Id; using this logic, check whether the record belongs to the custom object you want your plugin to act on, and run the plugin (see the sketch below).
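Something along these lines (the 'a0B' prefix and initMyPlugin are made up; substitute your object's actual key prefix and your plugin's entry point):

// Run the plugin only when the current record's Id starts with the
// key prefix of the target custom object. 'a0B' is a made-up prefix,
// and the regex is a rough match for 15/18-character record Ids.
var TARGET_PREFIX = 'a0B';
var match = window.location.pathname.match(/[a-zA-Z0-9]{15,18}/);
if (match && match[0].substring(0, 3) === TARGET_PREFIX) {
    initMyPlugin(); // hypothetical entry point of the jQuery plugin
}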
But as eyescream has pointed out, adding script to the sidebar has its own limitations: you cannot use the global variables, sidebar components are not loaded on the Reports and Dashboards tabs, etc.
-ಸಮಿರ್
I have a Silverlight solution that has multiple Silverlight projects ("Views") that each compile to their own .xap file.
There is one "master" project that handles the dynamic downloading of the .xap files, which works pretty well.
But now I need to make sure that all references are set to CopyLocal=false in all the View projects. Only the "master" project may have CopyLocal=true.
This keeps the .xap files generated by the Views rather small.
What I would like to do is check, during or after the build process, whether any of the View projects has a reference with CopyLocal=true.
What would be a smart way of doing this? An external tool in the post-build event? Perhaps an add-in for Visual Studio? Or a Visual Studio macro?
I have looked at using .extmap with assembly caching, but since you have to specify the assemblies there, it does not solve my problem. I just need to know whether there is a reference with the wrong setting and report it. Fixing it is not the question; that will still be done manually. It's just the notification I need.
The solution has 35 projects now, so I don't want to check them all by hand every time.
I found a question similar to this one, but it lists MSBuild as a possible solution. I would like to know if there is a way to do this using "code" (be it prebuilt in a tool/add-in or otherwise).
I have chosen to go the add-in path. I created an add-in that listens to BuildEvents.OnBuildBegin.
Whenever that event fires, I create a list of all projects in the current solution, doing a bit of recursive searching since there are also solution folders that make life in the DTE world a bit harder.
Then I loop through all the projects and cast each one to a VSProject so I can loop through all its references.
Any time I come across a reference that is wrong, I create an ErrorTask whose Document property I set to the full solution path of the reference. To do this, I build the path for the project the reference is in, all the way up to the root of the solution.
The ErrorTask is then sent to an ErrorListHelper class I created, which handles the ErrorTasks and also performs navigation.
If, when I'm done with all the projects, I have found any errors, I cancel the current build and show the Error List window, where my ErrorListHelper holds all the reference errors I created.
Whenever I want to navigate to the reference in question, I activate the Solution Explorer window and get its root using a UIHierarchy.
Then I walk the path from the root on down, step by step, using the UIHierarchy to get to the UIHierarchyItems and expand them, until I reach the deepest level (the reference) and select it.
Since I only need this for a certain solution, and within that solution for certain projects (.Views.* and .ViewModels.*), I also have some checks for those in place while building up the error list.
It works like a charm - it already found 12 "wrong" references in 35 projects where I thought all was well.
I am using a different path now to do this. I have a base class that I can use to write unit tests that have access to the DTE2 object. This way I don't need an add-in. It also works for Silverlight projects, since the test class does not actually need access to the Silverlight projects; just being in the same solution is enough to iterate through the projects and check their references.
I'm doing a personal, just-for-fun project that uses screen scraping to give me a system tray notification whenever a row in an HTML table is added, modified, or deleted.
Having done this before, I thought: well, let's go with the regular expression thing and be done with it. But being a curious person made me think there could be something else out there with a different paradigm that is just as simple to use.
I know about the DOM and XPath and all the XML-ish approaches. I'm looking for something outside the box, something that can even be defined as a set of rules, so you can build a plugin system to aggregate various sites.
See Options for HTML Scraping
Here's an idea: assuming your main use case is getting a notification whenever an HTML file changes, why not use a standard diff tool and then loop through the changed lines, applying your rules?
Also, if this is a situation where you have access to the server and the files you're watching, you might be able to put everything under source control with CVS (or similar) and just watch for commits. If you want to use this approach for random sites on the web, just write a script that periodically downloads the HTML for the appropriate URLs, commits it to source control, and watches the diffs.
Not very practical, but outside the box.
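If you want to skip source control entirely, here is a minimal Node.js sketch of the same download-and-diff idea (the URL and snapshot file name are placeholders, and the line-set comparison is deliberately naive):

// Fetch a page, compare it line-by-line with the previous snapshot,
// and report added/removed lines. URL and SNAPSHOT are placeholders.
var fs = require('fs');
var https = require('https');

var URL = 'https://example.com/table.html';
var SNAPSHOT = './last-snapshot.html';

https.get(URL, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        var oldLines = fs.existsSync(SNAPSHOT)
            ? fs.readFileSync(SNAPSHOT, 'utf8').split('\n')
            : [];
        var newLines = body.split('\n');
        // naive line-set diff: enough to know that something changed
        var added = newLines.filter(function (l) { return oldLines.indexOf(l) === -1; });
        var removed = oldLines.filter(function (l) { return newLines.indexOf(l) === -1; });
        if (added.length || removed.length) {
            console.log('Changed: +' + added.length + ' / -' + removed.length + ' lines');
        }
        fs.writeFileSync(SNAPSHOT, body);
    });
});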
If you can convert the source into valid XHTML/XML using something like SgmlReader or HTML Tidy, then you could use XSLT: simply create an XSLT template for each site you wish to scrape.