I need to add columns to a Salesforce report dynamically (based on particular conditions). I'm planning to do this with a trigger that looks for my conditions. My two questions:
Is it possible to add columns to a report dynamically?
Can we schedule triggers based on time intervals instead of database updates?
Thanks, BR
Madura
I'm not aware of any way to manipulate reports from Apex. Report definitions can be retrieved and modified with the Metadata API (the one used by the Eclipse IDE, for example), but that means you'd have to resort to hacks, since the Metadata API is not easily available from Apex.
It's a kind of "known problem" and many people have already researched it:
http://boards.developerforce.com/t5/Apex-Code-Development/Is-it-possible-to-call-Metadata-API-from-Apex-code-Getting-Error/td-p/119412
https://github.com/financialforcedev/apex-mdapi - looks really interesting I'd say
https://salesforce.stackexchange.com/questions/1082/has-anyone-ever-successfully-invoked-the-metadata-api-from-within-apex
Do you really think that some kind of "dynamic report" is a valid solution for the business need, though? I mean, users would be confused if they added some columns to the report and the next day the report definition changed, wiping out their work...
As for the other question - you probably shouldn't use the word "trigger" ;) If you want some Apex to run at time intervals you should have a look at job scheduling (write a class that implements Schedulable) and then you can schedule it to run at specific times. Without special tweaking the job can fire as often as every hour.
Of course there's also the option of time-based workflows that would perform a field update and cause a real trigger to fire, but that's very data-centric - there's no guarantee that it will run at fixed intervals.
I have a task where I need to get input from the user and perform some updates to a DB table. Now I know this is something that needs to be done through a UI, but due to some limitations I have been assigned the task of doing this through an SSRS report. I know this is possible to do, but is it good practice to do updates or inserts through an SSRS report?
First, SSRS is not designed for this kind of thing. Maybe someone has done this, but I would advise you to consider an Access form against a SQL back end, SharePoint, or an ASP.NET web form instead.
I agree with Zaynul, this is not good practice. Interface-wise, the user expectation of a report is not that it causes updates or inserts, but that it retrieves data. Report automation tools (largely out of your control), like subscriptions, could have this running in a way you don't want. It's probably being used as a substitute for writing an interface element in a "program". Finally, the lack of control over the parameter fields makes validating inputs more painful (compare with, say, VB). If the drawbacks are unclear to you, you should probably avoid this path altogether.
If you need to do something like this, bear in mind the drawbacks and take precautions:
Confirmation parameters ("Update? Y/N")
Preview what will be updated or inserted before allowing an update
Is there a way to "reverse" the update or insert?
Is the change stored so there is an audit trail for changes done this way?
Follow the same rules you'd use if you wrote a real interface/program for updates
Before I start I'd like to apologize for the rather generic nature of my questions - I am sure a whole book could be written on this particular topic.
Let's assume you have a big document database with multiple document schemas and millions of documents for each of these schemas. During the lifetime of the application the need arises to change the schema (and content) of the already stored documents frequently.
Such changes could be:
adding new fields
recalculating field values (splitting Gross into Net and VAT)
dropping fields
moving fields into an embedded document
In my last project, where we used a SQL DB, we had some very similar challenges, which resulted in significant offline time (for a 24/7 product) when the changes became too drastic, since SQL DBs usually take a LOCK on a table while such changes are made. I want to avoid that scenario.
Another related question is how to handle schema changes from within the programming language environment being used. Usually schema changes happen by changing the class definition (I will be using Mongoid, an object-document mapper for MongoDB and Ruby). How do I handle old versions of documents that no longer conform to my latest class definition?
That is a very good question.
The good part of document-oriented databases such as MongoDB is that documents in the same collection don't need to have the same fields. Having different fields does not raise an error, per se. It's called flexibility. It's also the bad part, for the same reasons.
So the problem, and also the solution, comes from the logic of your application.
Let's say we have a model Person and we want to add a field. Currently we have 5,000,000 people saved in the database. The problem is: how do we add that field with the least downtime?
Possible solution:
Change the logic of the application so that it can cope both with a person that has the field and with one that doesn't.
Write a task that adds that field to each person in the database (a sketch of such a task is below).
Update the production deployment with the new logic.
Run the script.
So the only downtime is the few seconds it takes to redeploy. Nonetheless, we need to spend time on the logic.
So basically we need to choose which is more valuable: the uptime or our time.
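For what it's worth, here is a minimal sketch of that backfill task. I'm using pymongo rather than Mongoid just to keep it self-contained, and the collection and field names (people, nickname) are made up for illustration:

    # Hypothetical backfill: add a "nickname" field to every Person document
    # that doesn't have one yet. Safe to re-run, and it never clobbers
    # documents that the new application code has already written.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    people = client.myapp.people

    result = people.update_many(
        {"nickname": {"$exists": False}},
        {"$set": {"nickname": None}},
    )
    print(f"Backfilled {result.modified_count} documents")

The same $exists check is what the application logic from the first step can fall back on for documents the task hasn't reached yet.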
Now let's say we want to recalculate a field, such as the VAT value. We cannot do the same as before, because having some products with VAT A and others with VAT B doesn't make sense.
So, a possible solution would be:
Change the logic of the application so that it shows that the VAT values are being updated, and disable the operations that could use them, such as purchases.
Write the script to update all the VAT values (sketched below).
Redeploy with the new code.
Run the script. When it finishes:
Redeploy with the full operation code.
So there is no absolute downtime, just a partial shutdown of some specific parts. The user can keep seeing the product descriptions and using the other parts of the application.
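Again only as a rough sketch (pymongo, invented field names, and an assumed flat 20% rate): unlike the first case, the script has to derive the new values per document instead of doing a blanket $set.

    # Hypothetical recalculation: split each product's gross price into net + VAT.
    # Field names and the 20% rate are assumptions for illustration only.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    products = client.myapp.products

    for product in products.find({"vat": {"$exists": False}}):
        gross = product["gross"]
        net = round(gross / 1.20, 2)
        products.update_one(
            {"_id": product["_id"]},
            {"$set": {"net": net, "vat": round(gross - net, 2)}},
        )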
Now let's say we want to drop a field. The process would be pretty much the same as the first one.
Now, moving fields into embedded documents; that's a good one! The process would be similar to the first one, but instead of checking for the existence of the field we need to check whether it is an embedded document or a plain field.
The conclusion is that with a document-oriented database you have a lot of flexibility, and so you have elegant options at hand. Whether you use them or not depends on whether you value your development time or your clients' time more.
I am working on a new web app where I need to store any changes in the database to audit table(s). The purpose of such audit tables is that later, in a real physical audit, we can ascertain what happened in a situation, who edited what, and what the state of the DB was at the time of, e.g., a complex calculation.
So the audit tables will mostly be written and not read, though a report may be generated sometimes.
I have looked at the available solutions:
AuditTrail - simple, and that is why I am inclined towards it; I can understand its single-file code.
Reversion - looks simple enough to use, but I'm not sure how easy it would be to modify if needed.
rcsField seems to be very complex and too much for my needs
I haven't tried any of these, so I wanted to hear about some real experiences and which one I should be using, e.g. which one is faster, uses less space, and is easier to extend and maintain?
Personally I prefer to create audit tables in the database and populate them through triggers, so that any change, even ad hoc queries from the query window, is stored. I would never consider an audit solution that is not based in the database itself. This is important because people who are making malicious changes to the database or committing fraud are not likely to do so through the web interface, but on the back end directly. Far more of this stuff happens from disgruntled or larcenous employees than outside hackers. If you are using an ORM already, your data is at risk because the permissions are at the table level rather than the stored-procedure level where they belong. Therefore it is even more important that you capture any possible change to the data, not just what came from the GUI. We have a dynamic proc to create audit tables that is run whenever new tables are added to the database. Since our audit tables store only the changes and not the whole record, we do not need to change them every time a field is added.
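To make the idea concrete, here is a toy illustration of trigger-based auditing. It uses SQLite through Python purely so the sketch is self-contained and runnable; we actually generate ours with a dynamic proc as described above, and every table and column name below is invented:

    # Any UPDATE of customer.email is copied into customer_audit by the trigger,
    # no matter which client issued it - GUI, ORM, or ad hoc query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE customer (
        id    INTEGER PRIMARY KEY,
        email TEXT
    );
    CREATE TABLE customer_audit (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER,
        old_email   TEXT,
        new_email   TEXT,
        changed_at  TEXT DEFAULT (datetime('now'))
    );
    CREATE TRIGGER customer_update_audit
    AFTER UPDATE OF email ON customer
    BEGIN
        INSERT INTO customer_audit (customer_id, old_email, new_email)
        VALUES (OLD.id, OLD.email, NEW.email);
    END;
    """)

    conn.execute("INSERT INTO customer (email) VALUES ('a@example.com')")
    conn.execute("UPDATE customer SET email = 'b@example.com' WHERE id = 1")
    print(conn.execute("SELECT * FROM customer_audit").fetchall())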
Also, when evaluating possible solutions, make sure you consider how hard it will be to revert the data to undo a specific change. Once you have audit tables, you will find that this is one of the most important things you need to do with them. Also consider how hard it will be to maintain the information as the database schema changes.
Choosing a solution because it appears to be the easiest to understand is not generally a good idea. That should be the lowest of your selection criteria, after meeting the requirements, security, etc.
I can't give you real experience with any of them but would like to make an observation.
I assume by AuditTrail you mean AuditTrail on the Django wiki. If so, I think you'll want to instead look at HistoricalRecords developed by the same author (Marty Alchin aka #gulopine) in his book Pro Django. It should work better with Django 1.x.
This is the approach I'll be using on an upcoming project, not because it necessarily beats the others from a technical standpoint, but because it matches the "real world" expectations of the audit trail for that application.
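In case it helps anyone comparing approaches, the general shape is a parallel history model populated on every save. This is not Marty Alchin's actual code, just a hand-rolled sketch with invented model names; HistoricalRecords (and AuditTrail) generate the history model for you instead:

    # Minimal hand-rolled version of the idea: every save of Invoice is copied
    # into an append-only InvoiceHistory row. Names are placeholders.
    from django.db import models
    from django.db.models.signals import post_save
    from django.dispatch import receiver

    class Invoice(models.Model):
        status = models.CharField(max_length=20)
        total = models.DecimalField(max_digits=10, decimal_places=2)

    class InvoiceHistory(models.Model):
        invoice = models.ForeignKey(Invoice, on_delete=models.CASCADE)
        status = models.CharField(max_length=20)
        total = models.DecimalField(max_digits=10, decimal_places=2)
        saved_at = models.DateTimeField(auto_now_add=True)

    @receiver(post_save, sender=Invoice)
    def record_history(sender, instance, **kwargs):
        # Append-only: history rows are written once and never modified.
        InvoiceHistory.objects.create(
            invoice=instance, status=instance.status, total=instance.total
        )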
As I stated in my question, rcsField seems to be too much for my needs, which are simple: I want to store any changes to my tables, and maybe come back to those changes later to generate some reports.
So I tested AuditTrail and Reversion.
Reversion seems to be a more full-blown application with many features (which I do not need). Also, as far as I know it saves data in a single table in XML or YAML format, which I think will generate too much data in a single table, and to read that data I may not be able to use the DB tools that are already present.
AuditTrail wins in that regard: for each table it generates a corresponding audit table, so changes can be tracked easily, the per-table data is smaller, and it can be easily manipulated and used for report generation.
So I am going with AuditTrail.
I have to develop a database for a unique environment. I don't have experience with database design and could use everybody's wisdom.
My group is designing a database for a piece of physics hardware and a data acquisition system. We need a system that will store all the hardware configuration parameters and track the changes to these parameters as they are changed by the user.
The setup:
We have nearly 200 detectors and roughly 40 parameters associated with each detector. Of these 40 parameters, we expect only a few to change during the course of the experiment. Most parameters associated with a single detector are static.
We collect data for this experiment in timed runs. During these runs, the parameters loaded into the hardware must not change, although we should be able to edit the database at any time to prepare for the next run. The current plan:
The database will provide the difference between the current parameters and the parameters used during last run.
At the start of a new run, the most recent database changes will be loaded into the hardware.
The settings used for the upcoming run must be tagged with a run number and the current date and time. This is essential. I need a run-by-run history of the experimental setup.
There will be several different clients that both read and write to the database. Although changes to the database will be infrequent, I cannot guarantee that the changes won't happen concurrently.
Must be robust and non-corruptible. The configuration of the experimental system depends on the hardware. Any breakdown of the database would prevent data acquisition, and our time is expensive. Database backups?
My current plan is to implement the above requirements using a sqlite database, although I am unsure if it can support all my requirements. Is there any other technology I should look into? Has anybody done something similar? I am willing to learn any technology, as long as it's mature.
Tips and advice are welcome.
Thank you,
Sean
Update 1:
Database access:
There are three lightweight applications that can write to and read from the database, and one application that can only read.
The applications with write access are responsible for setting a non-overlapping subset of the hardware parameters. To be specific, we have one application (of which there may be multiple copies) which sets the high voltage, one application which sets the remainder of the hardware parameters which may change during the experiment, and one GUI which sets the remainder of the parameters which are nearly static and are only essential for the proper reconstruction of the data.
The program with read access only is our data analysis software. It needs access to nearly all of the parameters in the database to properly format the incoming data into something we can analyze properly. The number of connections to the database should be >10.
Backups:
Another setup at our lab dumps an xml file every run. Even though I don't think xml is appropriate, I was planning to back up the system every run, just in case.
Some basic things about the design: you should make sure that you don't delete data from any tables; keep track of the most recent data (probably best with a last-updated datetime), but when a data value changes, don't delete the old data. When a run is initiated, tag every table used with the run ID (in another column); this way you maintain a full historical record of every setting and can pin down exactly what state was used for a given run.
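Since SQLite is already on the table, here is a bare-bones sketch of that append-only, run-tagged idea. The table and column names are only placeholders, not a finished schema:

    # Append-only parameter log: rows are inserted, never updated or deleted.
    import sqlite3

    conn = sqlite3.connect("daq_config.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS parameter_log (
        id          INTEGER PRIMARY KEY,
        detector_id INTEGER NOT NULL,
        parameter   TEXT    NOT NULL,
        value       TEXT    NOT NULL,
        run_id      INTEGER,                          -- NULL until used in a run
        changed_at  TEXT NOT NULL DEFAULT (datetime('now'))
    );
    """)

    # Record a change; the "current" setting for a detector/parameter pair is
    # simply the row with the latest changed_at (or the highest id).
    conn.execute(
        "INSERT INTO parameter_log (detector_id, parameter, value) VALUES (?, ?, ?)",
        (17, "high_voltage", "1450"),
    )
    conn.commit()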
Ask around among your colleagues.
You don't say what kind of physics you're doing, or how big the working group is, but in my discipline (particle physics) there is a deep repository of experience putting up and running just this type of system (we call it "slow controls" and the like). There is a pretty good chance that someone you work with has either done this or knows someone who has. There may be a detailed description of the last time around in someone's thesis.
I don't personally have much to do with this, but I do know this: one common feature is a no-delete, no-overwrite design. You can only add data, never remove it. This preserves your chances of figuring out what really happened in case of trouble.
Perhaps I should explain a little more. While this is an important task and has to be done right, it is not really related to physics, so you can't look it up on SPIRES or on arXiv.org. No one writes papers on the design and implementation of medium-sized slow-controls databases. But they do sometimes put it in their dissertations. The easiest way to find a pointer really is to ask a bunch of people around the lab.
This is not a particularly large database by the sound of things, so you might be able to get away with using Oracle's free database, which will give you all kinds of great flexibility with journaling (not sure if that is an actual word) and administration.
Your mentioning "non-corruptible" right after you say "There will be several different clients that both read and write to the database" raises a red flag for me. Are you planning on creating some sort of application that has an interface for this? Or were you planning on direct access to the DB via a tool like TOAD?
In order to preserve your data integrity you will need to get really strict on your permissions. I would only allow one person (and a backup) to have admin rights with the ability to do data manipulation outside the GUI (which will make your life easier).
Backups? Yes, absolutely! Not only should you do daily, weekly, and monthly backups, you should do both full and incremental ones. Also, test your backup images often to confirm they are in fact working.
As for the data structure, I would need much greater detail on what you are trying to store and how you would access it. But from what you have put here I would say you need the following tables to begin with (a rough sketch follows the list):
Detectors
Parameters
Detector_Parameters
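Roughly, in schema terms (sketched with SQLite through Python only to keep it concrete and runnable; every column name here is a guess based on your description):

    # Rough sketch of the three tables. One Detector_Parameters row per
    # detector/parameter/run, so each run keeps its own copy of the values
    # that were actually loaded into the hardware.
    import sqlite3

    conn = sqlite3.connect("daq_config.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS Detectors (
        detector_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE IF NOT EXISTS Parameters (
        parameter_id INTEGER PRIMARY KEY,
        name         TEXT NOT NULL,
        unit         TEXT
    );
    CREATE TABLE IF NOT EXISTS Detector_Parameters (
        detector_id  INTEGER NOT NULL REFERENCES Detectors(detector_id),
        parameter_id INTEGER NOT NULL REFERENCES Parameters(parameter_id),
        run_id       INTEGER NOT NULL,
        value        TEXT    NOT NULL,
        PRIMARY KEY (detector_id, parameter_id, run_id)
    );
    """)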
Some additional notes:
Since you will be making so many changes, I recommend using a version control system like SVN to keep track of all your DDL, etc. I would also recommend something like Bugzilla for bug tracking (if needed) and Google Docs for team document management.
Hope that helps.
I am having to build a web app that has an event calendar section. Like in Outlook, the requirement is that users can set up recurrent events, and can move individual events around within a series of events.
What methods could one use to store (in a database) the various ways you can describe the recurrence pattern of a series?
How would one record the exceptions?
What strategies do you use to manage redefining the series and its effects on the exceptions?
I've done this a couple of times, differently, but I'd like to see how others have tackled this issue.
Have a look at how the iCal format deals with recurrence patterns and recurrence exceptions. If you want to publish the events at some point, you will have a hard time avoiding iCal anyway, so you could just as well do it in a compatible way from the start.
For one thing: if you're not already familiar with it, take a look at RFC 5545 (which replaces RFC 2445) which defines the iCalendar specification for exactly this kind of pattern.
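For example, with python-dateutil the stored RRULE text can be expanded on demand. This is just a sketch; the rule string below is an arbitrary "every Monday at 19:00, 52 times":

    # Expanding an RFC 5545 recurrence rule with python-dateutil.
    from datetime import datetime
    from dateutil.rrule import rrulestr

    rule = rrulestr("FREQ=WEEKLY;BYDAY=MO;COUNT=52",
                    dtstart=datetime(2012, 1, 2, 19, 0))

    # One way to persist this: store the RRULE string on the series row, plus an
    # exceptions table of (series_id, original_start, new_start or cancelled flag)
    # for occurrences the user has moved or deleted.
    occurrences = list(rule)
    print(occurrences[:3])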
I've typically provided front-end logic that allows a user to specify a recurring event, but then actually used individual database entries to record the events as separate records in SQL Server.
In other words, they can specify a meeting every Monday night at 7PM, but I record 52 records for the year so that individual meetings can be changed, deleted or additional information added to those events.
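A rough sketch of that expansion step (invented table and column names, and SQLite via Python rather than SQL Server, just to show the shape of it):

    # Materialise one row per occurrence so individual meetings can be edited,
    # cancelled, or annotated independently. All names are placeholders.
    import sqlite3
    from datetime import datetime
    from dateutil.rrule import rrule, WEEKLY, MO

    conn = sqlite3.connect("calendar.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS events (
        id        INTEGER PRIMARY KEY,
        series_id INTEGER,
        starts_at TEXT,
        title     TEXT
    )""")

    for start in rrule(WEEKLY, byweekday=MO, count=52,
                       dtstart=datetime(2012, 1, 2, 19, 0)):
        conn.execute(
            "INSERT INTO events (series_id, starts_at, title) VALUES (?, ?, ?)",
            (1, start.isoformat(), "Monday night meeting"),
        )
    conn.commit()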
I provide methods to allow the user to cancel all future events, and then re-enter a new recurring series if they need to.
I've not come up with a perfect way to handle this, so I will monitor this thread to see if any great suggestions come up.