Sitecore Analytics truncation

We have implemented Sitecore for our upcoming corporate site, which is going live soon. We had some testing done on the site to make sure everything was working correctly, and as a result some data was written to the Analytics database.
Now we want to get rid of this data and start fresh, and we would like to go the route of truncating the data out of the tables. Is there a way to do this? I know this could be done with built-in functionality in the OMS, but not in the DMS. Also, which tables would be safe to truncate?
Thanks

Personally, I would start fresh by attaching an empty DMS database. You will need to redeploy any goals, page events, campaigns, etc., but it's a much safer option than truncating the tables.
Another thing to consider is that most (if not all) of the reports in the DMS are set up to accept a start and end date. Simply running your reports starting from the launch date may be all you need.
If you decide to truncate the tables, I would focus on any tables that have a Foreign Key relationship to the Visits table (the DMS ships with a Database Diagram that's really handy for stuff like this). Going in order that would be the PageEvents, Pages, and Profiles tables. Then it should be safe to clear out the Visits table.
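If you do go down the truncation route, here is a minimal T-SQL sketch of that order of operations. The table names are the ones mentioned above; verify them against the Database Diagram in your own DMS database, and take a full backup first.

    -- Clear DMS analytics data child-first so the foreign keys stay satisfied.
    -- Table names are as referenced above; confirm against your own schema.
    BEGIN TRANSACTION;

    DELETE FROM dbo.PageEvents;
    DELETE FROM dbo.Pages;
    DELETE FROM dbo.Profiles;
    DELETE FROM dbo.Visits;   -- TRUNCATE would fail here while other tables hold FKs to Visits

    COMMIT TRANSACTION;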


Alternative to Apatar for scripted data migration

I'm looking for the fastest-to-success alternative solution for related data migration between Salesforce environments with some specific technical requirements. We were using Apatar, which worked fine in testing, but late in the game it has started throwing the dreaded socket "connection reset" errors and we have not been able to resolve it - it has some other issues that are leading me to ditch it.
I need to move a modest amount of data (about 10k rows total) between several sandboxes and ultimately to a production environment. The data is spread across eight custom objects. There is a four-level-deep master-detail relationship, which obviously must be preserved.
The target environment tables are 100% empty.
The trickiest object has a master-detail and two lookup fields.
Ideally, the data from one table near the top of the hierarchy should be filtered by a simple WHERE clause, with children that don't fall under the matching rows left out of the migration, but I'll settle for a solution that migrates all the data wholesale.
My fallback in this situation is going to be good old Data Loader, but it's not ideal because our schema is locked down and does not contain external ID fields, so scripting a solution that preserves all the M-D and lookups will take a while and be more error prone than I'd like.
It's been a long time since I've done a survey of the tools available, and I don't have much time to do one now, so I'm appealing to the crowd. I need an app that will be simple (able to configure and test very quickly), repeatable, and rock-solid.
I've always pictured an SFDC data migration app that you can just check off eight checkboxes from a source environment, point it to a destination environment, and it just works, preserving all your relationships. That would be perfect. Free would be nice too. Does such a shiny thing exist?
Sesame Relational Junction seems to best match what you're looking for. I haven't used it, though, so I can't comment on its effectiveness for what you're attempting.
The other route you may want to look into is using the Bulk API or using the Data Loader CLI with Task Scheduling.
You may find this information (below), from an answer to a different question, helpful.
Here is a list of integration services (other than Apatar):
Informatica Cloud
Cast Iron
SnapLogic
Boomi
JitterBit
Sesame Relational Junction
Information on other tools to integrate Salesforce with other databases is available here:
Salesforce Web Services API
Salesforce Bulk API
Relational Junction has a unique feature set that supports cloning, splitting, and merging of Salesforce orgs, and will keep the relationships intact in a one-pass load. It works like this:
Download the source org to an empty database schema (any relational DBMS)
Download the target org to a second empty database schema
Run some scripts to condition the data; this varies by object. Sesame provides guidance and sample scripts, but essentially you have to set a control field to tell Relational Junction whether to create or update in Salesforce. This is also where you may need to replace source IDs with target IDs if some objects have been pre-populated during sandbox creation.
Replicate the second database to the target org
Relational Junction handles the socket disconnects, timeouts, and whatever havoc happens during the unload/reload process gracefully and without creating duplicates.
This process was developed for a proof of concept at a large Silicon Valley network vendor in 2007, who became a customer. The entire down and up of 15 GB of data took 46 hours, plus about 2 days of preparation.

Building a web application with multiple database instances or just a single instance

I am currently designing a web application where customers will sign up as companies, and each company will have its own set of users. As I design this, I am wondering which approach would work best. I see sites like FogBugz or Basecamp which use subdomains. In cases with subdomains, do you have a database instance per subdomain? I'm wondering if it is recommended to have a database instance per company, or if I should have some kind of company table and manage the company and user data/credentials all from one database.
Which approach is best? Is there literature on this subject (i.e. any web or book)?
thanks in advance!
You have to weigh up your options, as some of this will be a matter of opinion and might not be feasible for your implementation.
That being said, I'd consider the single database approach, for these reasons:
Maintenance: when running a database per registered 'client', you will very easily reach a situation where any changes or upgrades you make to your app's schema have to be applied to every single database instance. This will get ridiculous, fast.
Convenience: You might want analytics and usage stats, or some way to administrate all these databases. Querying a single database is comparatively trivial to trying to aggregate the same query for all your databases. This isn't going to scale.
Scalability *: As mentioned in 2, you're going to require a special sort of aggregation to query things about your clients, and your app as a whole. The bigger your app gets, the more complex your querying. The other issue is, if one client uses the app a lot more than another, what will you be encouraged to optimise? Your app, the bigger client's database, or the smaller client's? Not forgetting anything you do change has to be copied to all databases.
Backups: You can backup one database easily, just by creating a dump and stashing it somewhere. Get a thousand clients and now you have to run 1000 database dumps, and name them well enough to be able to identify them if one single database corrupts. How will you even know if this happens? Database errors will be localised to that specific one, as opposed to your entire app.
UI: A user signs up or is invited to use your app, and belongs to one particular client. Are you going to save that user account to the client's database? If so, see scalability for the issue of working with that data when the user wants to change their password, or you want to email them. So, do you tell the user to let you know which database they're in so you can find them?
Simplification: You have a database per client and want to just use a single one. How do you merge them all together without significantly breaking things? There'll be primary key conflicts if you use auto incremented IDs; bookmarked URLs will break if you decide to just regenerate the keys; foreign keys across tables will no longer point to the right records. Your data integrity will go down the pan.
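To make the single-database option concrete, here is a minimal multi-tenant schema sketch. All names are hypothetical; the point is simply that every tenant-owned row carries the company's key and every query filters on it.

    -- Hypothetical single-database, multi-tenant schema (T-SQL)
    CREATE TABLE Companies (
        CompanyId  INT IDENTITY(1,1) PRIMARY KEY,
        Name       NVARCHAR(200) NOT NULL,
        Subdomain  NVARCHAR(63)  NOT NULL UNIQUE   -- e.g. 'client' in client.example.com
    );

    CREATE TABLE Users (
        UserId       INT IDENTITY(1,1) PRIMARY KEY,
        CompanyId    INT NOT NULL REFERENCES Companies (CompanyId),
        Email        NVARCHAR(320) NOT NULL UNIQUE,
        PasswordHash VARBINARY(64) NOT NULL
    );

    -- Every tenant-scoped query filters on the company key:
    SELECT UserId, Email FROM Users WHERE CompanyId = @CompanyId;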
You mention 'white label' services that offer their product through custom subdomains. I'm not privy to how these work, but the subdomain is only a basic CNAME or A record in their DNS zonefile. The process of adding these can be automated, and the design of the application and a bit of server configuration can deal with linking these subdomains to the correct accounts and data. They're just URLs, so maybe on the backend, the app doesn't differentiate between:
http://client.example.com
http://example.com/client
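Under that assumption, resolving which account a request belongs to is just a lookup on the subdomain (or first path segment) the request came in on, e.g. against the hypothetical Companies table sketched above:

    -- Map the slug parsed from the Host header or URL path to the owning tenant.
    SELECT CompanyId
    FROM Companies
    WHERE Subdomain = @RequestedSlug;   -- e.g. N'client'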
Overall though, you may decide that all these problems are things you can and would prefer to deal with. Be warned, however, that by doing so you may be shooting yourself in the foot, and you can gain a lot more from crafting a well-designed single database schema and a well-abstracted front-end.
* @xQbert mentions the very real benefit of scalability with multiple databases. I've amended this answer to clarify that I was more concerned with other aspects.

Adding primary keys to a production database

I just inherited a relatively small SQL Server database. We have a decentralized system operating on about ten sites, with each site being pounded all day by between sixty and one hundred clients. Upon inspecting the system, a couple of things jumped out at me: there are no maintenance plans or keys defined.
I have dozens of different applications that are already accessing the database. The majority of them are written in C with inline SQL. Part of what I was brought in to do was write stored procedures for everything and have our applications move to that. Before I do this, however, I really think I should be focusing on these seemingly glaring issues.
Also, we'll eventually be looking into replication to a central site, so I really think these things should be addressed before we even think of that.
Figuring out a redesign scheme and maintenance plan will be time-consuming but not problematic - I've done it before at single sites. But, how am I going to go about implementing these major changes to the database across ten (or more) production sites while ensuring data integrity and not breaking the applications?
I would suspect that with no keys officially defined, this database probably has tons of data integrity problems. Lucky you.
For replication you will need GUIDs. I would do this: add the GUIDs and PK definitions in the dev environment and test, test, test. You'll probably find a lot of places where people did SELECT * and adding the columns will cause problems or cause things to show up on reports that you don't want. Find and fix all of these things. Be sure to script all the changes to the data and put them in source control along with any code changes you need to make to the applications. Then schedule downtime for maintenance of the database during the lowest-usage hours. Let the users know the application will be down ahead of time. During the downtime, have the application show a down message, change the database to single-user mode so no one except the team making this change can affect the database, make a full backup, run the scripts to make the changes to the database, run the code to change the application, test, take the database out of single-user mode and turn the application back on.
Under no circumstances would I try to make a change this major without going to single user mode.
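For what it's worth, here is a hedged T-SQL sketch of that maintenance-window sequence. The database, table, and column names are placeholders, not anything from the poster's system.

    -- Run during the scheduled downtime; placeholder names throughout.
    ALTER DATABASE SiteDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    BACKUP DATABASE SiteDb TO DISK = N'D:\Backups\SiteDb_PreKeys.bak' WITH CHECKSUM, INIT;

    -- GUID column for merge replication, plus the missing primary key:
    ALTER TABLE dbo.Orders
        ADD rowguid UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
            CONSTRAINT DF_Orders_rowguid DEFAULT NEWSEQUENTIALID();

    ALTER TABLE dbo.Orders
        ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId);

    ALTER DATABASE SiteDb SET MULTI_USER;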
First ensure you have valid backups of every db, and test-restore them to make sure they restore OK.
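A minimal sketch of that check (file paths and logical file names are placeholders):

    BACKUP DATABASE SiteDb TO DISK = N'D:\Backups\SiteDb.bak' WITH CHECKSUM, INIT;

    RESTORE VERIFYONLY FROM DISK = N'D:\Backups\SiteDb.bak' WITH CHECKSUM;

    -- The only real test is an actual restore, e.g. under a scratch name on a test server:
    RESTORE DATABASE SiteDb_RestoreTest
    FROM DISK = N'D:\Backups\SiteDb.bak'
    WITH MOVE 'SiteDb'     TO N'D:\Data\SiteDb_RestoreTest.mdf',
         MOVE 'SiteDb_log' TO N'D:\Data\SiteDb_RestoreTest.ldf';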
Consider using Ola Hallengren's maintenance scripts rather than Maintenance Plans if you need to deploy identical, consistent, scripted solutions to all your sites (Ola Hallengren's site).
Then I'd say look at getting some basic indexing in place, starting with the heavy-hitter tables first. You can identify them with various methods - I presume you know how, but just to throw a few thoughts out: code review, SQL Trace, query plan analysis, and then there are 3rd-party tools, e.g., Idera SQLdm, Confio Ignite, Quest's Spotlight on SQL Server or Foglight Performance Analysis for SQL Server.
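If the sites are on SQL Server 2005 or later (an assumption), the plan cache DMVs are one quick way to see which statements are doing the most reads, which usually points straight at the heavy-hitter tables:

    -- Top statements by logical reads since the plan cache was last cleared.
    SELECT TOP (20)
        qs.total_logical_reads,
        qs.execution_count,
        SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
            (CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;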
I think this will get you rolling.
Some additional ideas.
One of the first things I'd check is: are all the database instances alike as far as database objects are concerned? Do they all have the exact same tables, columns (and their order in the tables), nullability, etc.? Be sure to check pretty much everything listed in sys.objects. Once you know that the database structures are all in sync, then you know that any database modification scripts you generate will work on all the instances.
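One low-tech way to do that comparison, assuming you can run a query on each site, is to dump the column metadata from every instance and diff the result sets:

    -- Run on each instance and compare the output (or CHECKSUM_AGG it).
    SELECT  t.name        AS table_name,
            c.name        AS column_name,
            c.column_id,
            ty.name       AS type_name,
            c.max_length,
            c.is_nullable
    FROM sys.tables  AS t
    JOIN sys.columns AS c  ON c.object_id = t.object_id
    JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
    ORDER BY t.name, c.column_id;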
Once you modify your test environment with your planned changes, you have to ensure that they don't break existing functionality. Can you accurately emulate "...being pounded all day by between sixty and one hundred clients" on your test environment? If you can't, then you of course cannot know if your changes will break anything until they go live. (An assumption I'd avoid: just because a given instance has no duplicates in the columns you wish to build a primary key on does not mean that there are never any duplicates present...)
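For the duplicate question specifically, a quick check to run against every site's copy before declaring a key (column and table names are placeholders):

    -- Any rows returned mean the proposed key (ColA, ColB) cannot be a primary key as-is.
    SELECT ColA, ColB, COUNT(*) AS dup_count
    FROM dbo.SomeTable
    GROUP BY ColA, ColB
    HAVING COUNT(*) > 1;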

An app to search a database

I'm not a developer (not a proper one) but always up for an excuse for self-development.
I work in a support team for a reputable software vendor, and we currently use a helpdesk piece of software called iSupport.
It's not a bad piece of kit, and I'm not sure if it has been set up badly (trying to find out), but the biggest problem I face (being on the front line as an analyst who uses it 8 hours a day) is its inability to easily search.
A new 'incident' will come in. Client will report some errors in a log, perhaps mention some other keywords when describing his symptoms.
Now I know I have probably answered a similar problem before (but can't remember the solution), or even more likely, a fellow analyst may have answered the same question before.
I would like to have the ability to have one search box (think Google) that searches through EVERY incident that has ever been created and return me ALL incidents that contain that keyword.
At the moment the search is very poor - you can take time to set up searches and specify which fields you want to search on and which values to filter by (perhaps by analyst or category, etc.), but this takes time and, more often than not, it returns poor results and it would have been easier to try and track it down yourself manually.
All of the data sits in underlying SQL Server tables (have requested a subset of the data).
What I'm thinking of is creating a separate front-end that is just a basic search box and that's it. This app will point to all the relevant fields in the SQL tables and pull the relevant records out into a table. Once I have the ID for the incident, it is then a simple job to pull that incident back up in the iSupport front end.
I was thinking along the lines of a Google Desktop-style app (a shortcut key brings up the search box).
Now that's as much thinking as I've done. I'm looking for some advice on where to go next.
I know, for instance, that Google Desktop crawls and indexes all the physical files on your machine. Would I have to do something similar for a database, as I imagine there may be a large number of records/fields/tables to search through?
TBH, if it works, I'm not that fussed (to begin with) if it takes a while to process the query, as long as it returns relevant results. But ideally I'd like it to be quick.
I'll leave it at that for now.
Where should I begin?
If .NET is your thing, then Lucene.NET will work well with SQL Server to give you that Google-search feeling.
The Stack Overflow websites use it; you can listen to the SO Podcast where Jeff and Joel bitch about why SQL Server full-text search sucks so much.
I'd suggest this might be a good candidate for a web application - an asp.net / jsf website. This means that you can control it from one machine, but all your colleagues can make use of it without a deployment headache every time you add a new feature...
The incident database is mission critical (critical to your relationship with customers), so if you came to me with this request I would insist that you accessed the database through a user that had SELECT permissions on the appropriate tables, and very little else. This is a good thing from your point of view too - it lets you operate knowing you're not going to cock anything up...
The SSMS Tool Pack (an add-in to Management Studio) contains a feature to search Table, View or Database Data.
Have a look into the Full-Text Search functionality of SQL Server.
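A minimal sketch of what that could look like against the incident tables. All object names here are placeholders, not the real iSupport schema, and Full-Text Search has to be installed on the instance.

    CREATE FULLTEXT CATALOG IncidentCatalog;

    CREATE FULLTEXT INDEX ON dbo.Incidents (Subject, Description)
        KEY INDEX PK_Incidents          -- must be a unique, single-column, non-nullable index
        ON IncidentCatalog;

    -- Google-style keyword lookup across the indexed columns:
    SELECT IncidentId, Subject
    FROM dbo.Incidents
    WHERE CONTAINS((Subject, Description), N'"connection reset"');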
iSupport includes a full-text search function. Add a 'Global Search' widget to a dashboard you create.

django AuditTrail vs Reversion

I am working on a new web app where I need to store any changes to the database in audit table(s). The purpose of such audit tables is that later on, in a real physical audit, we can ascertain what happened in a situation, who edited what, and what the state of the db was at the time of, e.g., a complex calculation.
So mostly the audit tables will be written and not read, though a report may sometimes be generated.
I have looked at the available solutions:
AuditTrail - simple, and that is why I am inclining towards it; I can understand its single-file code.
Reversion - looks simple enough to use but not sure how easy it would be to modify it if needed.
rcsField seems to be very complex and too much for my needs
I haven't tried any of these, so I wanted to hear about some real experiences and which one I should be using, e.g. which one is faster, uses less space, and is easier to extend and maintain?
Personally I prefer to create audit tables in the database and populate them through triggers, so that any change, even an ad hoc query from the query window, is stored. I would never consider an audit solution that is not based in the database itself. This is important because people who are making malicious changes to the database or committing fraud are not likely to do so through the web interface but on the backend directly. Far more of this stuff happens from disgruntled or larcenous employees than outside hackers. If you are using an ORM already, your data is at risk because the permissions are at the table level rather than the sp level where they belong. Therefore it is even more important that you capture any possible change to the data, not just what came from the GUI. We have a dynamic proc to create audit tables that is run whenever new tables are added to the database. Since our audit tables store only the changes and not the whole record, we do not need to change them every time a field is added.
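For illustration only, here is a stripped-down sketch of that trigger approach. The table and column names are hypothetical, and a production version would cover inserts and deletes and every column (which is exactly what a generated, dynamic solution buys you).

    -- Hypothetical audit table: one row per changed column, not a copy of the whole record.
    CREATE TABLE dbo.Customers_Audit (
        AuditId    INT IDENTITY(1,1) PRIMARY KEY,
        CustomerId INT           NOT NULL,
        ColumnName SYSNAME       NOT NULL,
        OldValue   NVARCHAR(MAX) NULL,
        NewValue   NVARCHAR(MAX) NULL,
        ChangedBy  SYSNAME       NOT NULL DEFAULT SUSER_SNAME(),
        ChangedAt  DATETIME      NOT NULL DEFAULT GETDATE()
    );
    GO
    CREATE TRIGGER trg_Customers_AuditUpdate
    ON dbo.Customers
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Capture the change whether it came from the app, an sp, or an ad hoc query.
        INSERT INTO dbo.Customers_Audit (CustomerId, ColumnName, OldValue, NewValue)
        SELECT d.CustomerId, N'Email', d.Email, i.Email
        FROM deleted AS d
        JOIN inserted AS i ON i.CustomerId = d.CustomerId
        WHERE ISNULL(d.Email, N'') <> ISNULL(i.Email, N'');
    END;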
Also, when evaluating possible solutions, make sure you consider how hard it will be to revert the data to undo a specific change. Once you have audit tables, you will find that this is one of the most important things you need to do with them. Also consider how hard it will be to maintain the information as the database schema changes.
Choosing a solution because it appears to be the easiest to understand is not generally a good idea. That should be the lowest of your selection criteria, after meeting the requirements, security, etc.
I can't give you real experience with any of them but would like to make an observation.
I assume by AuditTrail you mean AuditTrail on the Django wiki. If so, I think you'll want to instead look at HistoricalRecords developed by the same author (Marty Alchin aka @gulopine) in his book Pro Django. It should work better with Django 1.x.
This is the approach I'll be using on an upcoming project, not because it necessarily beats the others from a technical standpoint, but because it matches the "real world" expectations of the audit trail for that application.
As I stated in my question, rcsField seems to be too much for my needs, which are simple: I want to store any changes to my tables and maybe come back to those changes later to generate some reports.
So I tested AuditTrail and Reversion.
Reversion seems to be a fuller-blown application with many features (which I do not need). Also, as far as I know, it saves data in a single table in XML or YAML format, which I think:
will generate too much data in a single table;
means that to read that data I may not be able to use the db tools already present.
AuditTrail wins in that regard: for each table it generates a corresponding audit table, so changes can be tracked easily, per-table data stays smaller, and it can be easily manipulated and used for report generation.
So I am going with AuditTrail.
