We have noticed that after retracting a MOSS solution package, we are still left with incorrect leaf entries in the AllDocs MOSS database table. This is an issue if, for example, we rename a feature that deploys the same artifacts - MOSS will then not let us deploy the solution as it thinks these items already exist.
Would be interested to hear if anyone else has had this problem.
Depending on the solution, SharePoint doesn't always clean everything up. However, you must never touch the SharePoint database! Even querying it is not supported, as this can cause locking issues that will make the application unreliable. Also see KB 841057.
There should always be a way to solve the problem via the SharePoint API. Once you've found it, add the clean-up code to a feature receiver so that it executes when the feature is deactivated. If you need help, please ask in a new question with the code/schema you are using.
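For illustration, a minimal sketch of such a receiver might look like the following; the class name and the provisioned page URL are hypothetical placeholders, and since the WSS 3.0/MOSS receiver methods are all abstract, the unused ones are overridden as no-ops:

using Microsoft.SharePoint;

// Hypothetical receiver that removes artifacts the feature provisioned, so a
// retract/redeploy does not trip over leftover AllDocs entries.
public class CleanupFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
    {
        // Assumes a Web-scoped feature; adjust the cast for Site-scoped features.
        SPWeb web = properties.Feature.Parent as SPWeb;
        if (web == null)
            return;

        // Example only: remove a page the feature provisioned.
        SPFile file = web.GetFile("SitePages/MyProvisionedPage.aspx");
        if (file.Exists)
            file.Delete();
    }

    public override void FeatureActivated(SPFeatureReceiverProperties properties) { }
    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}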
Depending on the error you are receiving, the tools on these pages may also help:
WssRemoveFeatureFromSite
Dealing with abducted SharePoint features
I agree with Alex, this shouldn't concern you, and the fact that you've noticed it means that you have more than a healthy interest in the SharePoint DB.
A practical explanation may be that you created assets (e.g. list items) that have references to the solution (content types, site columns, etc.), so these are orphaned when the solution is removed. Of course, when you reinstall the solution these assets will work correctly again, so it's not all bad.
My site is messed up and I am trying to fix it. Regardless of whether I get help, it will likely take a while, and it's really important that my site be live, even if it's a crappy version with just the articles and no template.
Would it not work to make a backup of the database, install Joomla fresh (the same version), connect it to that duplicate database, point my domain there, and then go back to fixing the current live site? Are there any issues I should know about going in? There's a good chance the issues are related to the template or extensions (at least that's my understanding so far; see my other post for details on the issue), so I would think it would be faster to do this to get a working site rather than turning each extension off and on, especially since I would have to do that manually (and I don't know how yet) because I can't access the backend.
If this will work, do I choose the database when I install, or install empty and then change which database it connects to, or install empty and import the tables (and how)? I still have to figure out whether I can clone just the database and not all the files, as that takes hours.
Thanks for the help, and if I should have appended this to the other post I apologize, but I figured it's a separate issue.
First, ensure you have backups of both the files and the database. Then make a local copy of your site to work on later.
The infection may lie:
in the Joomla core files, with extra content (which is usually fairly easy to spot, for example an eval of a large base64-encoded variable);
in extra files (keep in mind that even images could contain malicious code); these are usually triggered outside of Joomla for spamming or other nefarious purposes;
in the database content.
Fix:
Apply a fresh Joomla update package over your site; this will only fix no. 1 above. It may restore some functionality, but probably only briefly before the site is compromised again.
Analyse the logs, and try to figure out how they got in. You need to step up security as obviously what you have is not enough.
Install a fresh Joomla, add all extensions that your site uses, copy the images folder, then connect it to a copy of the compromised database. This will fix nos. 1 and 2 above (as you have gotten rid of any extra files). It may survive until they figure out you fixed it; but if you haven't patched your security, they will hack into your site again. Keep a copy of this, and restore it as needed as you proceed with the following step.
Export the db to SQL format (mysqldump or phpMyAdmin may come in handy), then search for any XSS traces, PHP code, or JavaScript that may have been injected. Since a complete review could take days, and assuming the malicious code links elsewhere, look for strings such as "https://" and "http://"; also search with / escaped as \/ and \\\/ to account for JSON-encoded data (a small scanning sketch follows after this list).
Once the db is clean, your local copy is reasonably safe; update all extensions and Joomla, and use it to restore the website until you fix your security.
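As a rough illustration of that dump-scanning step, something like the following could be used; the dump file name and the pattern list are examples only, and any hits still need a human look, since legitimate content also contains links:

using System;
using System.IO;

// Rough sketch: scan an exported SQL dump for strings that often indicate
// injected content. "sitebackup.sql" and the patterns are examples only.
class DumpScanner
{
    static void Main()
    {
        string[] patterns =
        {
            "http://", "https://",           // plain links
            "http:\\/\\/", "https:\\/\\/",   // JSON-escaped links
            "<script", "eval(", "base64_decode("
        };

        int lineNumber = 0;
        foreach (string line in File.ReadLines("sitebackup.sql"))
        {
            lineNumber++;
            foreach (string pattern in patterns)
            {
                if (line.IndexOf(pattern, StringComparison.OrdinalIgnoreCase) >= 0)
                {
                    Console.WriteLine("line {0}: matched \"{1}\"", lineNumber, pattern);
                    break;
                }
            }
        }
    }
}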
It might work; cloning the DB is fine as long as the Joomla version is the same. It won't break just because of that, but it may fail if files for some extensions are not found. The real question is how many extensions you are using and how much cleansing you need.
On the other hand, you mention that the site should be 'live'. Just do everything on localhost: test, fix templates, etc. Then, once you're sure you're done, use Akeeba Backup and deploy the new version to your server without long delays.
Any kind of cleansing needs a starting point:
1. You can clean the site while it is live; it depends on the complexity.
2. The cleanup can be done offline and then deployed.
3. Sometimes custom import/export routines are needed, so you have to build your own tools for everything. This happens with large amounts of data, for example when people have made a mess inside the images folder or something like that.
4. ...
It's pointless to make copies of the DB. Install the same version of Joomla on your local server, then install the same template, copy the styles, etc.
Then import the data with your own tools or paid ones. The estimated time is from a few hours to a few days; it's just data :)
I was just hired to maintain and redesign various sites the company has running on an old version of DNN. The site has been hacked: someone uploaded directories and web.config files that were redirecting users to suspicious streaming sites, and the attacker also added scripts that show Google Ads on all the blog articles. Needless to say, it's a mess.
Nevertheless, I was able to go in there, delete a super admin account (that's how they got in, I think), delete a few directories that had over a thousand HTML files for streaming sites, and delete the old FCK Editor.
I am completely new to DNN and need some help with the directory structure to see if I can resolve this. So far, I cannot get rid of the Google Ads in the blog, and for the life of me I cannot find where the blog articles live inside the root directory. When I go in and delete the ads through the DNN UI, the ads come back within hours or a couple of days. The directories with the HTML files have not returned; just the ads.
I know that we have to upgrade, but if I can remove the ads I will have more time to develop the new sites without feeling rushed because of the current issue.
If anyone can point me in the right direction I would really appreciate it.
That sounds awful, as obviously someone (or something) is still manipulating your site from the outside (or inside?). There are a lot of issues in old DNN versions, and the only thing I can really recommend is to find an upgrade path to the newest version. I don't know how big your site is; maybe it is easier to set it up from scratch with a new version (if the site is not too big).
The directory structure will not help you find any content, as everything is stored in the database. To be more specific I would need more information about the DNN version and the extensions (and versions of the extensions) in use, but disclosing this here in public could be a security risk for you. You could write me a PM here if you wish to get in contact.
To find people (maybe in your area) who can help you could give these web sites a try: https://dnncommunity.org (Resources > Forums) and https://dodnn.work/.
It sounds like your site might have been impacted by a few different exploits; most likely it is version 7.x or earlier and has been upgraded from versions prior to that.
For the immediate need, you are going to have to identify anything and everything that is out of the norm. This can be very daunting for those who are not familiar with the platform, but here are a few tips:
Look in the DB for data in the Header or Footer field of the TabModules table (a query sketch follows after this list)
Look for any rogue files that really should not be there, anything with an extension such as .php, .asp, etc.
Look for rogue files in directories outside of the /Portals/* folder that don't match a DNN Install. (This takes a bit of personal experience.)
Look at your default.aspx file; it should NOT have a recent modified date. If it does, compare it against the one you get when you download the install package of that version of DNN.
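For the TabModules check above, a sketch of a direct query could look like this; the connection string is a placeholder, and the "dnn_" object qualifier is an assumption (drop the prefix if your install doesn't use one). Review anything that turns up against what you expect those modules to contain.

using System;
using System.Data.SqlClient;

// Sketch: report TabModules rows whose Header or Footer contains anything,
// since injected script tags often end up in those fields.
class TabModuleCheck
{
    static void Main()
    {
        const string connectionString =
            "Server=.;Database=MyDnnDb;Integrated Security=true";  // placeholder

        const string sql =
            @"SELECT TabModuleID, Header, Footer
              FROM dnn_TabModules
              WHERE DATALENGTH(Header) > 0 OR DATALENGTH(Footer) > 0";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("TabModuleID {0} has header/footer content",
                        reader["TabModuleID"]);
                }
            }
        }
    }
}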
Now, once you have done this, be sure to do any mitigations that you can for known exploits. Including:
Delete /Install/Install.aspx
Delete /Install/InstallWizard.aspx
Disable any host account with a username of 'host' and create a new one if that was your only one
Feel free to email me directly as well if you need some help.
I've inherited the maintenance of a DotNetNuke (v6.2.0.1610) site, and one of the things I'd like to do is to tidy up the database tables being used.
It looks like there might have been two installations of DNN into the same database (I'm guessing; I don't know its history and cannot find out). I'm making this assumption because there are two sets of DotNetNuke tables.
For example, we have:
dbo.Portals, dbo.PortalSettings, dbo.Profile, dbo.Roles, etc.
However, then we also have the same set, prefixed with dnn_ -
dbo.dnn_Portals, dbo.dnn_PortalSettings, dbo.dnn_Profile, dbo.dnn_Roles, etc.
I spent a good while tearing my hair out when I could not get our portal to load, until I discovered it was because I was editing the dbo.PortalAlias table when I needed to be editing the dbo.dnn_PortalAlias table instead.
I wanted to avoid this future maintenance headache, so I backed up the database, and set about deleting all the tables without the dnn_ prefix (web.config specifies objectQualifier="dnn_"). I diligently ensured there was a matching dnn_ table before deleting any.
At first it seemed fine - the portal loaded and all the content was there, so I thought I was on to a winner. However, when I logged in and accessed the site admin section, I started to get lots of error messages. I figured I'd deleted too much, so I restored the backup, and all is well - the portal is working again.
However, I really would like to get rid of the unnecessary tables, because no doubt at some point in the future I'll start doing some work on the database, forget about the dnn_ prefix and waste a bunch of time wondering why something isn't working.
So, as a bit of a DotNetNuke newbie, I'm after some help - how can I know what tables are in use, what aren't, and how can I set about tidying up the SQL Server tables? Thanks.
I suggest you delete only the tables which have an equivalent with the "dnn_" prefix.
The DNN database should contain at least the "aspnet_" prefixed tables which are used for the authentication on the portals.
Then, you could have some extensions which use tables without the "dnn_" prefix. It depends on the SQL scripts those extensions ran during their installation. I hope that those extensions don't run queries on the DNN tables without the "dnn_" prefix; otherwise that could explain the errors you've encountered.
You could use the SQL Server Profiler to check it.
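As a starting point for finding candidates, here is a sketch that only reports base tables which have a "dnn_"-prefixed twin; it drops nothing, and the connection string is a placeholder. Views and stored procedures can still reference the unprefixed tables (as the fix below shows), so it is worth scripting those out and searching them for the unprefixed names before dropping anything.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch: list tables without the "dnn_" prefix that have a "dnn_" twin,
// i.e. the candidates for removal. Review each one before dropping anything.
class DuplicateTableReport
{
    static void Main()
    {
        const string connectionString =
            "Server=.;Database=MyDnnDb;Integrated Security=true";  // placeholder

        var tables = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'",
            connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    tables.Add((string)reader["TABLE_NAME"]);
            }
        }

        foreach (string table in tables)
        {
            if (!table.StartsWith("dnn_", StringComparison.OrdinalIgnoreCase)
                && tables.Contains("dnn_" + table))
            {
                Console.WriteLine("{0} has a dnn_ twin and is a candidate for removal", table);
            }
        }
    }
}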
It turns out there was a view, dnn_Lists, which was still referencing dbo.Lists without the dnn_ prefix.
I fixed this view and it's fine now.
(PS: It turns out it's useful to set IsSuperUser = 1 in the users table for the account you're logged in as, because then you get the full exception details and can fix it.)
Thanks
It would make sense to delete all tables WITHOUT "dnn_", but you said you got a problem.
If you have time and patience and are adamant about tidying things up, I would delete one table at a time and test the admin feature that broke last time, until you find the culprit. That is a long shot, but that is how I would approach it.
What might be happening here is that you have a third-party module installed that ignores the objectQualifier, and when you deleted those tables you broke that module.
We are building a webapp which is shipped to several clients as a Debian package. Each client runs their own server, but updates and support are done by us.
We make regular releases of the product, with a clean version number. Most of the users get an automatic update (via Puppet); some others don't.
We want to keep a trace of the version of the application (in order to allow the user to check the version in an "about" section, and for our support to help the user more accurately).
We plan to store the version of the code and the version of the database in our database, and to keep the info up to date automatically.
Is that a good idea?
The other alternative we see is storing it in a file.
EDIT: The code and database schema are updated together (if we update to version x.y.z, both the code and the database go to x.y.z).
Using a table to track every change to a schema, as described in this post, is a good practice that I'd definitely suggest following.
For the application, if it is shipped independently of the database (which is not clear to me), I'd embed a file in the package (and thus not use the database to store the version of the web application).
If not and thus if both the application and the database versions are maintained in sync, then I'd just use the information stored in the database.
As a general rule, I would have both the DB version and the application version. The problem here is how "private" the database is. If the database is "private" to the application and users never modify the schema, then your initial solution is fine. In my experience, databases which accumulate several years of data stop being private: users add a table or two and access the data using some reporting tool, and from that point on the database is not used exclusively by the application any more.
UPDATE
One more thing to consider is users (or the application) not being able to connect to the DB and calling for support. For this case it would be better to have the version, etc. stored on the file system.
Assuming there are no compelling reasons to go with one approach or the other, I think I'd go with keeping them in the database.
I'd put them in both places. Then when running your about function you quickly check that they are both the same, and if they aren't you can display extra information about the version mismatch. If they're the same then you will only need to display one of them.
I've generally found users can do "clever" things like revert databases back to old versions by manually copying directories around "because they can" so defensively dealing with it is always a good idea.
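To make the "check both and report a mismatch" idea concrete, here is a minimal sketch; the file path, table name, and column names are assumptions, and the SQL Server client is only used for illustration (the same idea applies to whatever database the package ships with). If the database is unreachable, the file still answers the support question, which covers the case raised in the update above.

using System;
using System.Data.SqlClient;
using System.IO;

// Sketch of an "about" check: read the application version from a file shipped
// in the package and the schema version from the database, then compare them.
class VersionInfo
{
    static void Main()
    {
        // Hypothetical path installed by the Debian package.
        string codeVersion = File.ReadAllText("/usr/share/mywebapp/VERSION").Trim();

        string dbVersion;
        using (var connection = new SqlConnection(
            "Server=.;Database=MyAppDb;Integrated Security=true"))  // placeholder
        using (var command = new SqlCommand(
            "SELECT TOP 1 Version FROM SchemaVersion ORDER BY AppliedOn DESC", connection))
        {
            connection.Open();
            dbVersion = (string)command.ExecuteScalar();
        }

        if (codeVersion == dbVersion)
            Console.WriteLine("Version {0}", codeVersion);
        else
            Console.WriteLine("Version mismatch: code is {0}, database is {1}",
                codeVersion, dbVersion);
    }
}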
Assumption: live/production web app suppresses errors being shown to end-users.
Suppose your tech support team wants to see live data but through the eyes of the development-side of the application (maybe you want to see what errors are occurring, or want to see when you've got an issue fixed using an end-user's data).
Right now we've got one database serving both the dev and live boxes (not my idea - I know it's gross).
Ideas?
Edit: Best/handy tools for implementing your suggestion?
We replicate the data back to a different database. Yes, there is a delay, but it keeps people's hands out of the production servers. This also allows us to "hide" information that tech support (and other people, for that matter) aren't supposed to see.
In addition to replicating data down, on production we check who's logged into the application, and if it's a member of the company we send them to the real error page instead of the apologetic happy-kitten-playing-with-a-ball-of-yarn page.
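A minimal sketch of that routing decision; the domain check and the page paths are placeholders, so use whatever actually marks internal accounts in your system:

using System;

// Sketch: show the detailed error page to company staff, the friendly page
// to everyone else.
static class ErrorPagePicker
{
    public static string PickErrorPage(string loggedInEmail)
    {
        bool isInternalUser =
            loggedInEmail != null &&
            loggedInEmail.EndsWith("@ourcompany.example", StringComparison.OrdinalIgnoreCase);

        return isInternalUser
            ? "/errors/details.aspx"   // full stack trace and context for staff
            : "/errors/sorry.aspx";    // the happy-kitten page for end users
    }
}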
Back up and restore from live to dev on a regular basis (once or twice a day). It doesn't need to be real-time (you might be entering data from the dev side anyway, which could cause problems).
If you have PCI or HIPAA data, make sure you don't put that in your dev environment -- that might break laws.
I generally like to have a 3-tier system for web development:
Development
Testing
Live
Most of the time, testing is an exact copy of the live system, except that errors are turned on. When a new version is about to be moved live, testing is replaced with the new version BEFORE live is, to detect upgrade issues.
Development is completely separate from live, to allow for major changes to things like the database, or changes to the production environment.
I would first make sure errors are either emailed to someone with details of how the user got there, or at minimum logged, so you can watch the error log while you perform similar actions and see whether you get the same messages in the log.
And yes, copying the database to the dev server/site is probably your only option. You don't want the development team making changes to live data, and you'll probably also have changes at some point that won't work with the production database.
I wouldn't recommend doing a nightly copy, as a developer might be in the middle of some new feature where they have added data, and then it's erased that night. I usually copy the production database(s) to dev each time a major version is released. This also allows me to do speed testing with a lot of live data. On some systems I also change everyone's password to a default so I can log in easily as any user.
If your configuration permits it:
a. Add a logging function (if there isn't one already) to write messages of interest to a log file (a minimal sketch follows at the end of this answer).
b. Run the Unix command
tail -f logfile.txt
which will stream the growing log file to your console.
http://www.monkey.org/cgi-bin/man2html?tail
If you have Windows, you might try this:
http://tailforwin32.sourceforge.net/
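For point (a), a minimal sketch of such a logging function, assuming the log file lives next to the application (the path is a placeholder):

using System;
using System.IO;

// Sketch: append a timestamped line to a log file that tail -f can follow.
static class AppLog
{
    private static readonly object Gate = new object();

    public static void Write(string message)
    {
        string line = string.Format("{0:yyyy-MM-dd HH:mm:ss} {1}", DateTime.Now, message);
        lock (Gate)   // avoid interleaved writes from concurrent requests
        {
            File.AppendAllText("logfile.txt", line + Environment.NewLine);
        }
    }
}

// Usage, e.g. in an error handler:
//   AppLog.Write("Unhandled exception on /checkout: " + ex.Message);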