Duplicate files with same content - filesystems

I was trying to implement a system where a user can save custom configurations.
My question to the teacher was: "Why should I allow the user to have multiple custom configurations that are 100% identical, just with different names?" To this, my teacher responded with the example of a file system, where I can save multiple duplicate files.
I am not entirely convinced by this response, although it is true.
I want to know why we allow the user to save duplicate files, or in my case duplicate configurations. I believe it is just redundancy and a waste of space that could be avoided.

Two configurations may be the same today, but next week one of them will be changed to do something different. Until then, it is a good idea to get used to loading ConfigA for JobA, and ConfigB for JobB. They are the same now, but next week ConfigB will change.
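To make the point concrete, here is a minimal sketch (the names and settings are made up) of configurations stored under the name the user gave them rather than deduplicated by content:

    # Configurations are looked up by name, not deduplicated by content.
    # "job_a" and "job_b" start out identical on purpose.
    configs = {
        "job_a": {"retries": 3, "timeout_s": 30, "verbose": False},
        "job_b": {"retries": 3, "timeout_s": 30, "verbose": False},
    }

    def load_config(name):
        """Return the configuration the user saved under this name."""
        return configs[name]

    # Next week JobB's needs change, and only its configuration is edited.
    configs["job_b"]["timeout_s"] = 120

    assert load_config("job_a") != load_config("job_b")

Because each job already loads its configuration by name, the later divergence costs nothing: nobody has to re-wire anything.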

Related

How to provide visibility to users based on field criteria?

How do I provide visibility to records for users based on field criteria?
My requirement is: when a record of the object File has a specific product, the users who are contacts on accounts that carry that product should see that File record. For example:
I create a record on the object File with the field Product filled with 'B';
An account with a specific record type also has the field Product filled with 'B' and 'C';
Because of that, the contacts (who are community users) on this account should have access to that File record, because the account contains 'B'.
How can I achieve this? I'm thinking about a trigger on the object File that checks which accounts contain the same product and then creates a sharing rule, but I don't know whether that is the best option, also because of the limit of 300. Is there any other way?
Uh, interesting one! You mean real Files (ContentDocument, ContentVersion), not some custom object, right? Files are a bit tricky; normally a community user would see all files attached to their account, plus special "Asset" files.
trigger on the object File (...) and then create a sharing rule
I don't think it'll work. Sharing rules are metadata, not data: you'd need a deployment, or you'd have to cheat by making an API callout. Sharing rules also don't really work for communities; you're supposed to use sharing sets there.
You could try making ContentDocumentLinks between the file header (ContentDocument) and the Account, and yes, you should be able to do that from a trigger. I don't remember whether there are limitations such as one file being linkable to at most X records, so this might be tricky. A change to an Account's product would potentially mean lots of links to add or remove, so maybe move that bit to @future / Queueable.
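For illustration only, this is roughly what creating such a link looks like outside of Apex, using the simple_salesforce Python library (the credentials and IDs below are placeholders, and the ShareType/Visibility values you need may differ):

    # Sketch: link an existing File (ContentDocument) to an Account so that
    # users who can see the Account can see the File. IDs/credentials are placeholders.
    from simple_salesforce import Salesforce

    sf = Salesforce(username="admin@example.com", password="...", security_token="...")

    sf.ContentDocumentLink.create({
        "ContentDocumentId": "069XXXXXXXXXXXXXXX",  # the File header
        "LinkedEntityId": "001XXXXXXXXXXXXXXX",     # the Account whose product matched
        "ShareType": "V",                           # viewer access
        "Visibility": "AllUsers",                   # so community users can see it too
    })

In an actual trigger you would build the ContentDocumentLink records in Apex and insert them in bulk, but the shape of the record is the same.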
Alternatively, you could just make all files and their folders visible in the community, maybe even for the guest user (look into Asset files?), and simply show or hide links to their folders based on what's on the account. It's a bit of "security by obscurity", but it's fairly easy to do, and adding or removing products wouldn't mean lots of operations. It depends on whether these files are somewhat sensitive or it's more about guiding the user to what they're interested in.
Ask on https://salesforce.stackexchange.com/ too; somebody may have even better ideas.

How to save user specific arrays in Mailchimp

For quite a while I have been struggling with how to save custom, user-specific arrays of data in Mailchimp.
Simple example: I want to save the project ids for a user in Mailchimp and, ideally, be able to use them there properly as well. Let's say user fritz#frey.com has the 5 project ids 12345, 25345, 21342, 23424 and 48935. Why is there no array merge field that lets me save this array of project ids to a user?! (Or is there one and I am just blind...)
I know I can use drop-down fields to put users in multiple groups, like types of projects for example, but the solution can hardly be a drop-down with all (several thousand) project ids where I tick the ones the user is part of (and I doubt Mailchimp would support that for a large number of group items anyway).
Oh, and of course I could build the field myself by abusing a string field, joining the project ids with commas or storing a JSON string, but that neither seems like a clean solution nor could I use the data properly in Mailchimp (as far as I know).
I googled quite a bit and couldn't find anything helpful sadly... :(
So? Can anybody enlighten me? :)
Thanks for all your help!
It sounds like you have already arrived at the correct answer: there is no "array" type, other than the interests type, which is global and not quite the same as an array.
The best solution here depends somewhat on your data. If each project ID will have many different subscribers attached to it, and there won't be too many projects active at any given time, I'd just use interests. If you think there may be dozens of project ids active simultaneously, I wouldn't store this data on the subscribers at all; instead I'd build static segments for each project and add users to them.
If projects won't have many subscribers associated with them, I'd store the data on your end and/or continue using the comma-separated string field.
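As a rough sketch of the static-segment route (one static segment per project), adding a subscriber to a project's segment through Mailchimp's v3 API looks roughly like this; the list id, segment id, e-mail address and API key below are all placeholders:

    # Sketch: add a subscriber to a per-project static segment via the Mailchimp v3 API.
    # LIST_ID, SEGMENT_ID, the e-mail address and the API key are placeholders.
    import requests

    API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-us6"  # data center suffix comes after the dash
    DC = API_KEY.split("-")[-1]
    LIST_ID = "abc123def4"
    SEGMENT_ID = "12345"  # the static segment created for one project

    resp = requests.post(
        f"https://{DC}.api.mailchimp.com/3.0/lists/{LIST_ID}/segments/{SEGMENT_ID}/members",
        auth=("anystring", API_KEY),  # basic auth: any username plus the API key
        json={"email_address": "subscriber@example.com"},
    )
    resp.raise_for_status()

Removing a user from a project is the corresponding DELETE on the same members sub-resource, so keeping the segments in sync from your own system stays fairly mechanical.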

How to deal with dependencies on database entries

I often write code that has a dependency on a database entity. I want to minimize the coupling between those different systems (or make it explicit and robust).
Example: I have a dropdown list of error categories. Users can define new categories, but one category is special in that errors belonging to it get an extra input field. So the system needs to know when the user has selected the special category, and that special category is not allowed to simply disappear.
How would you handle that special category? Would you match on the category name or id? Would you put the entity in a migration, or have your code regenerate it as needed? Would you omit it from the database and have it exist only in your code? I find myself picking a new solution every time this problem presents itself, but I'm never quite satisfied with any of them.
Has anyone found a satisfactory solution? What drawbacks have you found and how have you mitigated them?
I dislike special-case code, so I would design it to all live in the data model. The database would get a "can delete" field and a "has special entry" field, with some way to describe what that special input is. I would also try to make sure that I didn't over-design the special-input support, since there is only this one case so far.
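A minimal sketch of that data-model approach, using sqlite3 purely for illustration (the column names, the seeded category, and the label are all made up):

    # Sketch: keep the "special" behaviour in the data model instead of in code.
    import sqlite3

    conn = sqlite3.connect("app.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS error_category (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL UNIQUE,
            can_delete INTEGER NOT NULL DEFAULT 1,  -- 0 for categories the application depends on
            extra_input_label TEXT                  -- NULL unless errors in this category need an extra field
        )
    """)

    # Seed the special category as part of a migration so it is guaranteed to exist.
    conn.execute("""
        INSERT OR IGNORE INTO error_category (name, can_delete, extra_input_label)
        VALUES ('Hardware fault', 0, 'Serial number')
    """)
    conn.commit()

    def categories_for_dropdown(conn):
        """The UI reads the flags instead of matching on a hard-coded name or id."""
        return conn.execute(
            "SELECT id, name, can_delete, extra_input_label FROM error_category"
        ).fetchall()

The delete endpoint then refuses to remove rows with can_delete = 0, and the form renders the extra field whenever extra_input_label is not NULL, so no code ever has to match on the category's name or id.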

Delete data or just flag it as deleted?

I'm building a website that lets people create vocabulary lessons. When a lesson is created, a news item is created that references the lesson. When another user practices the lesson, that user also stores a reference to it together with the practice result.
My question is: what should happen when a user decides to remove the lesson?
The options I've considered are:
1. Actually delete the lesson from the database and remove all referencing news items, practice results, etc.
2. Just flag it as deleted and exclude the link from referencing news items, results, etc.
What are your thoughts? Should data never be removed, à la Facebook? Should references be avoided altogether?
By the way, I'm using Google App Engine (Python/datastore). As far as I can see, a db.ReferenceProperty is not set to None when the referenced object is deleted?
Thanks!
Where changes to data need to be audited, marking data as deleted (aka "soft deletes") helps greatly, particularly if you record the user who actioned the delete and the time when it occurred. It also allows data to be "un-deleted" very easily.
Having said that, there is no reason to prevent "hard deletes" (where data is actually deleted) as an administrative function to help tidy up mistakes.
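Since the question mentions the App Engine Python datastore, here is a hedged sketch of what a soft delete with that kind of audit trail could look like on the legacy db API (the model and property names are illustrative):

    # Sketch: soft delete with an audit trail on the legacy App Engine db API.
    import datetime

    from google.appengine.ext import db

    class Lesson(db.Model):
        title = db.StringProperty()
        deleted = db.BooleanProperty(default=False)
        deleted_by = db.UserProperty()
        deleted_at = db.DateTimeProperty()

    def soft_delete(lesson, user):
        """Flag the lesson instead of deleting it, recording who did it and when."""
        lesson.deleted = True
        lesson.deleted_by = user
        lesson.deleted_at = datetime.datetime.utcnow()
        lesson.put()

    def visible_lessons():
        """Every other query has to remember to exclude soft-deleted lessons."""
        return Lesson.all().filter('deleted =', False)

The last function hints at the main cost of soft deletes: every query that should not show deleted data needs the extra filter.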
Actually deleting the data is simplest. If you currently have no use for it, deleting keeps everything in your database very tidy and makes it easy to add new functionality.
On the other hand, if you're doing something like showing the user where their "vocabulary points" came from, or how many lessons they've completed, then the reference to soft deleted items might be necessary.
I'd start with the first one and change it later if you need to. Here's why:
If you're not using soft deletes, assume they won't work in the way that future requirements actually want them to. You'll have to rewrite them anyway.
If you are using them, assume that nobody is using the feature which uses them. Now you've done a lot of work and tied yourself into maintenance of something nobody cares about.
If you create them, you'll find yourself creating a feature to use them. See the above.
If you don't create them, you can always create them later, once you have better knowledge about what the users of your system really want.
Not creating soft deletes gives you more options going forward. Options have value. Options expire. Never commit early unless you know why.

Find redundant pages, files, stored procedures in legacy applications

I have several horrors of old ASP web applications. Does anyone have any easy ways to find which scripts, pages, and stored procedures are no longer needed? (Besides the stuff in "old___code", "delete_this", etc. ;-)
Chances are, if the stored proc won't run, it isn't being used, because nobody ever bothered to update it when something else changed. Table columns that are null for every single record are probably not being used either.
If you have your stored procs and database objects in source control (and if you don't, why don't you?), you might be able to reach back and find what other code they were moved to production with, which should give you a clue as to what might call them. You will also be able to see who touched them last, and that person might know whether they are still needed.
I generally approach this by first listing all the procs (you can get this from the system tables) and then marking off the ones I know are being used. Profiler can help you here, as you can see which procs are commonly being called. (But don't assume that because Profiler didn't show a proc it isn't being used; that just gives you the list of ones to research.) This makes the set that needs to be researched much smaller. Depending on your naming convention, it might be relatively easy to see what part of the code should use them. When researching, don't forget that procs are called in places other than the application, so you will need to check jobs, DTS or SSIS packages, SSRS reports, other applications, triggers, etc. to be sure something is not being used.
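For the "list all the procs from the system tables" step, something along these lines can complement Profiler on reasonably recent SQL Server versions. This is a rough sketch using pyodbc (the connection string is a placeholder), and note that sys.dm_exec_procedure_stats only reflects procedures whose plans are still in cache, so a missing row means "research this one", not "unused":

    # Sketch: list stored procedures with their last cached execution time.
    # The connection string is a placeholder; dm_exec_procedure_stats only covers
    # plans currently in cache, so a NULL last_execution_time is not proof of disuse.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
    )

    rows = conn.execute("""
        SELECT p.name,
               p.modify_date,
               s.last_execution_time,
               s.execution_count
        FROM sys.procedures AS p
        LEFT JOIN sys.dm_exec_procedure_stats AS s
               ON s.object_id = p.object_id
              AND s.database_id = DB_ID()
        ORDER BY s.last_execution_time
    """).fetchall()

    for name, modified, last_run, calls in rows:
        print(name, modified, last_run, calls)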
Once you have identified a list of procs you don't think you need, share it with the rest of the development staff and ask whether anyone knows if a proc is still needed. You'll probably get a couple more taken off the list this way because they are used for something specialized. Then, when you have the list, change the names to some convention that lets you identify them as candidates for deletion. At the same time, set a deletion date (how far out that date is depends on how often something might be called; if it is called something like AnnualXYZReport, make that date a year out). If no one complains by the deletion date, delete the proc (of course, if it is in source control you can always get it back even then).
Once you have gone through the hell of identifying the bad ones, it is time to realize you need to train people that part of the development process is to identify procs that are no longer being used and get rid of them as part of a change to a section of code. Depending on code reuse, this may mean searching the code base to see whether some other part of it uses the proc and then doing the same thing discussed above: let everyone know it will be deleted on a given date, change the name so that any code referencing it will break, and then on that date get rid of it. Or you could keep a metadata table where you record candidates for deletion at the time you know you have stopped using something, and send a report around to everyone once a month or so to determine whether anyone else needs them.
I can't think of any easy way to do this; it's just a matter of identifying what might not be used and slogging through.
For SQL Server only, 3 options that I can think of:
modify the stored procs to log usage
check if code has no permissions set
run profiler
And of course, remove access or delete it and see who calls...
