2SXC/DNN - Delete ADAM Files in Entity

We're designing a system for a client that lets authenticated users upload images. We've created an API to upload the files, but the client only wants to keep the latest file and delete all previous ones, so that there is only ever one.
We've looked through the docs and can't find a way for ADAM to handle this, in either 2sxc or DNN's file system.
Internally, when deleting images, we see calls like the following to the internal 2sxc API, but we're wondering whether this is exposed somewhere in the public API:
https://somedomain.com/api/2sxc/app/auto/data/61393528-b401-411f-a001-f423ea46700a/b7d04e2c-c565-496c-8efb-aa133cf90d33/Photo/delete?subfolder=&isFolder=false&id=189&usePortalRoot=false&appId=3
We could probably use the same endpoint above, but we'd likely run into permission issues, and changes to these internal APIs could break us later.
Thank you for any advice you can give! Perhaps #iJungleBoy can provide some thoughts on this.

As a solution from a completely different direction: if you are on a recent release of 2sxc (v12.8+, v13+) and comfortable programming in C#, you might consider doing this as a "cleanup" from a Dnn Scheduled Task. The setup is relatively easy. We have a Gist in place that we use as a starter: you simply put the code in the /App_Code folder, then set up a normal Dnn Scheduled Task. NOTE that you can scroll down to the first comment on the Gist to see a screenshot of a complete working setup.
Accuraty's AccuTasks template on GitHub Gists
There are two more key things to note:
You need to install Dnn's CodeDom 3.6, because the example uses newer C# string interpolation - OR remove the few $"ASL2021 - {this.GetType().Name}, Task Scheduled Email" bits, or convert them to string.Format() or similar.
Since your task's code is NOT running in a (2sxc) module, if needed, you'll do stuff like this: 2sxc Docs - Use 2sxc Instance or App Data from External C# Code
So, if you are comfortable writing code that "finds and deletes stuff older than NN days" - this might be the way to go.
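If you go that route, the core of such a task is small. Below is a minimal sketch, where the class name, portal id, app folder and 30-day retention are all assumptions for illustration - the Gist above is the real starting point. Note that this works purely on the file system, so 2sxc's ADAM metadata isn't updated; for a metadata-aware cleanup you'd load the app's data as described in the doc linked above.

using System;
using System.IO;
using System.Linq;
using DotNetNuke.Services.Scheduling;

// Hypothetical cleanup task - names and folder layout are assumptions.
public class AdamImageCleanup : SchedulerClient
{
    public AdamImageCleanup(ScheduleHistoryItem item) : base()
    {
        ScheduleHistoryItem = item;
    }

    public override void DoWork()
    {
        try
        {
            // Assumption: the app's ADAM files live under Portals/<id>/adam/<AppFolder>
            var root = Path.Combine(DotNetNuke.Common.Globals.ApplicationMapPath,
                @"Portals\0\adam\MyApp");
            var cutoff = DateTime.Now.AddDays(-30); // "older than NN days"

            foreach (var file in Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories)
                     .Where(f => File.GetLastWriteTime(f) < cutoff))
            {
                File.Delete(file);
                ScheduleHistoryItem.AddLogNote("Deleted " + file + "; ");
            }

            ScheduleHistoryItem.Succeeded = true;
        }
        catch (Exception ex)
        {
            ScheduleHistoryItem.Succeeded = false;
            ScheduleHistoryItem.AddLogNote("Cleanup failed: " + ex.Message);
            Errored(ref ex);
        }
    }
}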

Related

Where or How can I find a Complete List of Available Services for DNN and 2sxc

I've been trying to both convert old code and write new code using GetScopedService().
However, I keep discovering ones I didn't know about.
Is there an easy way to find the complete list of services available for 2sxc? For DNN? And maybe even RazorBlade?
If they are not documented somewhere, is there a page of code in the public repositories that I could bookmark where it would be easy to see (and compile) a list of them?
Your best place to start is https://r.2sxc.org/services (which goes to https://docs.2sxc.org/api/dot-net/ToSic.Sxc.Services.html)
This is where we keep all the current services published. Other services should be considered exotic / rare-use.
Razor-Blade is still mostly non-service, but we plan to fix that.
We're just about to release ServiceKits as a feature, which would make things even more intuitive. For example, ServiceKit14 has all commonly used services on it, and also IScrub from Razor Blade.
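As a rough sketch of how these surface in a Razor file (member names follow the docs linked above, but double-check them - this is from memory, not authoritative):

@inherits Custom.Hybrid.Razor14
@{
  // ServiceKit14 is exposed as Kit on the v14 Razor base class
  var json  = Kit.Json.ToJson(new { Name = "Test" });        // IJsonService
  var clean = Kit.Scrub.All("<div onclick='x()'>Hi</div>");  // IScrub from Razor Blade
  // Services not on the kit can still be requested directly:
  var mail  = GetService<ToSic.Sxc.Services.IMailService>();
}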

Is there an automated way to document Nancy services?

Is there any way to auto-generate Swagger documentation (or similar) for a Nancy service?
I found Nancy.Swagger, but there's no information on how to use it and the demo application doesn't seem to demonstrate generating documentation (if it does, it's not obvious).
Any help would be appreciated. Thanks!
In my current project I've been looking into this problem a lot. I used both nancy.swagger and nancy.swagger.attributes.
I quickly discarded Nancy.Swagger, because personally it doesn't seem right that you have to create a pure documentation class for each Nancy module. The attributes solution was a bit "cleaner" - at least the codebase and documentation were in one place - but it quickly became unmaintainable. Module code becomes unreadable with so many attributes, and nothing is generated automatically: you have to put the path, all the parameters, even the HTTP method into attributes. That's a huge duplication of effort. Problems appeared very quickly; a few examples:
- I changed POST to PUT in Nancy and forgot to update the [Method] attribute.
- I added a parameter but not the attribute for it.
- I changed a parameter from path to query and didn't update the attribute.
It's too easy to forget to update the attributes (let alone a separate documentation module), which leads to discrepancies between your documentation and the actual code base. Our UI team is in another country, and they had trouble using the APIs because the docs just weren't up-to-date.
My solution? Don't mix code and documentation. Generating docs from code (like Swashbuckle does) IS ok, but writing documentation in code and trying to duplicate the code in the docs is NOT. It's no better than writing it in a Word document for your clients.
If you want Swagger docs, just do it the Swagger way:
- Spend some time with Swagger.Editor and really author your API in YAML. It looks all-text and hard, but once you get used to it, it's not.
- Spend some time with Swagger.Codegen and adapt it (it already does a fair job of generating Nancy server code, and with a few adjustments to the Moustache templates it was just what I needed).
- Automate your process: write a couple of batch scripts to generate your modules and models from the YAML and copy them into your repository.
Benefits? Quite a few:
- Your YAML definition is now the single source of truth for your REST contract. If something somewhere is different, it's wrong.
- Nancy server code is auto-generated.
- Client code-bases are auto-generated (in our case: Android, iOS and Angular).
So whenever I change something in the REST contract, all the codebases are regenerated and added to the projects in one batch. I just have to tell the teams that something was updated. They don't have to look through documents and search for it; they simply get their code regenerated and, in case of breaking changes, probably see some compile errors.
Do I still use nancy.swagger(.annotations)?
Yes, I use it in another project which has just one endpoint with a couple of methods. They don't change often, so it's not worth the effort to set everything up, and I get my Swagger docs up and running fast. But if your project is big, the API keeps changing, and you have multiple code-bases depending on your API, my advice is to invest some time in a real Swagger setup.
I am quoting the author's answer here, from https://github.com/khellang/Nancy.Swagger/issues/59:
The installation should be really simple, just pull down the NuGet package, add metadata modules to describe your routes, and hit /api-docs. That should get you the JSON. If you want to add swagger-ui as well, you have to add that manually right now.
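For orientation, those metadata modules describe ordinary Nancy routes by their route names. A plain module with named routes looks roughly like this (the matching metadata module is omitted, since its exact shape differs between Nancy.Swagger versions):

using Nancy;

public class UsersModule : NancyModule
{
    public UsersModule() : base("/users")
    {
        // The route name ("GetUsers") is what a metadata module keys on.
        Get["GetUsers", "/"] = _ =>
            Response.AsJson(new[] { "alice", "bob" });

        Post["CreateUser", "/"] = _ => HttpStatusCode.Created;
    }
}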
No, not in an automated way. https://github.com/yahehe/Nancy.Swagger needs lots of manually created metadata.
There is a nice article here: http://www.c-sharpcorner.com/article/generating-api-document-in-nancy-using-swagger/
Looks like you still have to add swagger-ui separately.

Best Practices for Managed SalesForce App Development?

We're developing applications for AppExchange and are trying to figure out the best way to do development and release management. There are several issues around this:
1) Package prefixes. We are developing code in unmanaged mode and releasing as managed, so we have to add all the package prefixes into the code. Is there a way to do this dynamically at runtime? Right now we're using an Ant script, which stops us benefiting from the Force.com IDE plugin.
2) Resource files. We are doing some Ajax-y stuff, and as a result have a few different resource files to upload, some of which are multiple-file resources (zip files). Has anyone automated the building of these resources using Ant, and does that work well?
Our environment seems very fragile and works for some developers and not others; have other people had this problem? How did you resolve it?
I hate to say it, but it sounds like you've settled on the best approach that I know of. The Salesforce packaging environment can be a total nightmare to work with. Once your managed package has a prefix, there's really no going back to a plain package without one unless you script it like you've done. So you'll find the package name peppered throughout your code, which the system will add for you.
I've found the best way to work with it is to keep a "pure" version of your app, which will install cleanly into a dev org from within Ant. Once you have the code in Ant, it can be added into "normal" source control. It doesn't seem like too many larger scale apps have been built in Salesforce with multiple team members, because as far as I can tell, there isn't much support for a workflow that includes source code control. They tried adding some type of release management to a dev org configuration, which is now in beta, but it didn't seem that good at all.
I think Ant using the Salesforce Force.com migration tool is the way to go for the most part. Then, however, once you want to make a managed package, you're sort of stuck with that code base frozen, with that prefix, where you'll then have to do packaging releases (from beta, etc) from within the packaging system itself. The best way there is to refresh to sandbox (hard limit of once a month!!), then have developers pull out of that sandbox and deploy into individual dev orgs, which then can be merged periodically into a "group dev org", before deploying back into the Sandbox (using Force.com IDE or Ant), then into Production.
The whole process is basically a complete disaster. Salesforce is so close to having a super powerful platform, but a lot of the time feels like an awesome sports car without a steering wheel.
As for static resources, you should be able to automate those in a relatively straightforward way using Eclipse, so that you can deploy them separately in one step. The API should support it, too.
I have worked on some rather large Apex code bases (I think, and hope), and I'm afraid there is no apparent elegant solution. You'll be stuck with strange combinations of deploying with Ant in some cases, Eclipse in others, etc.
Coming from other development environments, it's often befuddling and just strange. For example, it's perplexing that you can't easily dump the database in one step, keeping track of relationships between objects, and then "import" it into another org in one step. We actually had to write a tool that makes it easy to extract all data while traversing object relationships, load all data, recursively delete data, etc. from an xls file, because we needed an easy way to test in orgs.
BTW, dev orgs are basically throw away orgs. We create dozens of them for different testing purposes and to keep different versions and configurations.
Sorry I couldn't give you better news. There might be more of a guru on here who can point to an elegant way to manage packaging, and I'll be as interested as you in the answer! You can email me at suprasphere --- at --- gmail if you want to commiserate! :)
We've recently switched to using a Prefix Manager instead of doing ant substitutions.
Here is our code.
public class PrefixMgr {
    private static string objPrefix = null;

    public static string getObjPrefix() {
        if (objPrefix == null) {
            try {
                Database.query('select MyColumn__c from my_prefix__MySmallTable__c');
                objPrefix = 'my_prefix__';
            }
            catch (Exception e) {
                objPrefix = '';
            }
        }
        return objPrefix;
    }

    public static string getAppPrefix() {
        return 'my_prefix__';
    }

    public static string getObjName(string inp) {
        return getObjPrefix() + inp;
    }
}
Basically this attempts a query (one time) against a table with the prefixed name. If it fails, we are in unmanaged mode with no package prefixes; if it succeeds, we set the prefix accordingly. getObjName is a convenience, because PrefixMgr.getObjName('MyObject__c') is easier to read (especially in a string concatenation) than PrefixMgr.getObjPrefix() + 'MyObject__c'.
Interested in thoughts and comments.

Feature tracking WinForms

I would like to extend my WinForms app with a feature that allows me to monitor which functions are used by the users.
The idea is to count how many times e.g. a button has been clicked, or a popup was opened.
I want to know which features are used more or less often by the users.
Any ideas how this can be done? (Or even if somebody solved this problem already)
tia,
Martin
The only mechanism I can think of to do what you're looking for is to use a logger like log4net / Log4PostSharp to log details to a file on the machine; this would give you usage details for that particular client. You would have to create a custom attribute to decorate your methods with, so that something gets written to the log file - otherwise your code would end up littered with logging calls!
Have a look at this article too; it uses Log4PostSharp with AOP (Aspect-Oriented Programming), which makes the logging implementation much cleaner (it uses attributes).
http://www.codeproject.com/KB/dotnet/log4postsharp-intro.aspx
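If you only need rough counts rather than full log records, a hand-rolled counter is also an option. A minimal sketch (all names made up; thread-safety and a real output format are left out):

using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class FeatureTracker
{
    private static readonly Dictionary<string, int> Counts =
        new Dictionary<string, int>();

    // Call this from event handlers, e.g. FeatureTracker.Track("SaveButton")
    public static void Track(string feature)
    {
        int current;
        Counts.TryGetValue(feature, out current);
        Counts[feature] = current + 1;
    }

    // Flush on application exit, then collect/inspect the file.
    public static void Flush(string path)
    {
        File.WriteAllLines(path,
            Counts.Select(kv => kv.Key + "\t" + kv.Value).ToArray());
    }
}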
You can find some products if you google for "application analytics" instead of "feature tracking".
I have found the following products:
includeapp.com
Software Statistics Service
Dotfuscator for .NET, DashO for Java
FusionAnalytics
Flurry Analytics
OpenSpan Desktop Analytics
DeskMetrics
EQATEC Analytics
Rapidengines
I might add that I also plan to create such a product. When it is in beta I will add it to the list.

How to merge Drupal database changes

We currently use an SVN repository to keep everyone's local environments up-to-date. However, Drupal website development is somewhat trickier, in that custom code you write (for instance, PHP code in a node body) is stored in the DB, so those changes aren't recognized by the SVN working copy.
There are a couple of developers presently working on the same area of a Drupal site, and we're uncertain how best to merge our local Drupal database changes together. Committing patches of database dumps seems clumsy at best, and is most likely inefficient and error-prone for this purpose.
Any suggestions on how to approach this issue are appreciated!
Unfortunately, database deployment/updates are one of Drupal's weak spots. See this question & its answers, as well as this one, for some suggestions on how to deal with it.
As for CCK, you could find some hints here.
As for PHP code in content, I agree with googletorp that you should avoid it. However, if for some reason you absolutely have to do it, you could try to reduce the code to a single function call. That way the function itself lives in a module (and is tracked via SVN). But then you are only a small step away from removing the need for the inline code anyway...
If you are putting PHP code into your database, you are doing it wrong. Some things do live in the database - views, CCK fields, and some settings - but if you put PHP code inside a node body you are creating a big code-maintenance problem. You should really use the API and hooks instead. Create modules instead of ugly hacks with eval() etc.
All that has been said above is true and good advice. To answer your practical question: there are a number of recent modules that you can use to transport the changes made by the various developers.
The "Features" module is a cure for the described issue: Drupal provides nice features, but stores lots of config and structure in the DB. This module enables you to capture a feature and output it as a pseudo-module (it qualifies as a module, with .info and code files and all). Here is how it works:
1) Select the functionality/feature to export.
2) The module analyses the modules, files, and DB content required to rebuild that feature elsewhere.
3) The module creates a pseudo-module that contains the instructions from the previous step and outputs everything (even the SQL to rebuild the stuff in the DB) into a module package (it also sets dependencies on other required modules).
4) Install the pseudo-module on your new site and enable it.
5) The pseudo-module replicates the feature you exported, rebuilding DB data and all.
And you can tell your boss you did it all manually with razor focus to avoid even 1 error ;)
I hope this helps - http://drupal.org/project/features
By committing patches of database dumps, do you mean taking an entire extract of the DB and committing it after each change?
How about a master copy of the database? Extract all tables, views, stored procedures, etc. into individual files, put them into SVN, and do your merge edits on the individual objects.
