How to restore WP site missing the database?

I have acquired a poorly backed-up site. The owner should have done an export, but instead just copied the three folders and a few files at the top level. Can the site be recreated without the database, or will I need to start over?

I am afraid you will need to start over.
Without the database or an export, it is practically impossible to recreate the site.
There is no physical storage of content inside the WordPress file structure. In rare cases the physical MySQL files can be found and used for an import, but again, that depends on what you received. If it's only wp-admin and so on, they are most likely not going to be there.
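If you want to check exactly what you received before giving up, a quick scan can rule the database in or out. A minimal sketch, assuming the copy landed in ./backup (a placeholder path):
# Look for anything that could hold content: a SQL dump, or raw MySQL data files
find ./backup \( -iname '*.sql' -o -iname '*.frm' -o -iname '*.ibd' -o -iname '*.MYD' \) -print
# Even without the posts, wp-content/uploads at least preserves the media library
ls ./backup/wp-content/uploads
If the scan turns up nothing but PHP files, the content really is gone.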

Related

Access database split problems

I am trying to split an Access database where I work, but I have encountered a few issues that I am struggling to resolve. Let me first explain the problem.
I work for a large multi-national company that has on-site IT support, but they do not support Access (so no help there).
There are 12 of us working in our section, and we have an old, badly designed StockMaster database on the networked F: drive. The problem is that it is only set up for single users, so we have to take turns using it. We aren't a computer-savvy bunch; we tend to run the same named queries on a daily basis.
The database is only updated once per day; every morning we get a download from our colleagues in Amsterdam. I do not want to play around with this database because, first of all, I'm no expert, and secondly, if I break it, no one will fix it.
My plan is this:
I have created a new Access database, StockMaster2, that imports the required tables. Using VB-coded modules, it deletes the old tables and then imports the new ones. Every morning it therefore replicates what is in the original database, and it works fine.
My next step is to split the database, create the front end and distribute. This is where I'm having problems.
I created the original front end, StockMaster2_fe.accde, and placed it in the database folder on the F:\ drive. Does every user get their own copy of the front end? I copied and saved two more front ends (copy and paste in the same folder -> rename), namely StockMaster2_alan_fe and StockMaster2_ryan_fe, and tested them. I told Ryan (who sits next to me) to find the front end named after him on the F:\ drive and open it whilst I was in ...alan_fe. We both went to run macros at the same time, but he was kicked out as it gave me exclusive access.
What am I doing wrong? Why is it not allowing multiple access?
My problem is that, due to strict administrator privileges, I cannot download any software or access the command line, so anything I do must be done in Access itself.
I apologize for not seeing this post sooner to end your agony. There are two main issues that must be resolved to get you on the right track. The first, and perhaps the most important, is that your file is named StockMaster2_fe.accde. The .accde extension is the executable version; design changes cannot be made to it. The extension should be .accdb, which gives you the full flexibility to alter the database: create one database for the back-end tables, and a second for the front-end objects, including queries, forms, reports, macros and modules. If you have the .accdb version, your work gets much easier.
Issue number two: if your team is not able to share the database, that is a sign that the database is opening in Exclusive mode. This option can be changed in Access Options, under the Advanced section; look for Default open mode. It should say Shared to allow multiple users to operate at once.
A possible hidden issue is that the database has VBA code that tells it to open exclusively. With the .accde version, you will not be able to access that code or change how the database opens.
Let's break this down (only because I finished all my work already...):
My next step is to split the database, create the front end and distribute. This is where I'm having problems. I created the original front end StockMaster2_fe.accde and placed it in the database folder on the F:\ drive. Does every user get their own copy of the front end?
Yes
I copied and saved two more front ends (copy and paste in the same folder -> rename) namely StockMaster2_alan_fe and StockMaster2_ryan_fe and tested it. I told Ryan (who sits next to me) to find the front end named after him on the F:\ drive and open it whilst I was in ...alan_fe. We both went to run macros at the same time but he was kicked out as it gave me exclusive access. What am I doing wrong?
Ensure your back end contains only tables. Access is a "client-centric" database, which means that when a query is run, it pulls all of the data over the pipe to your local computer, does what it does, and then sends it back. So make sure the back end has only tables, and all the other jazz (macros, queries, etc.) lives in the front end. The front end will contain links to the back-end tables, and all of your queries/macros/etc. will reference these links, not the tables in the back-end DB directly.
Why is it not allowing multiple access?
Also, make sure your locking scheme is multi-user friendly. If you're doing table locking, it will cause errors; if you're doing record locking, it probably won't.
My problem is that due to strict administrator privileges I cannot download any software or access the command line, so anything that I do must be done in Access itself.
Shouldn't be a problem at all.

How do I get permission to read/write access to database located in Resources inside sandboxed app?

I have a database in the MyApp.app/Contents/Resources folder. I want at least to read data from it in the sandboxed app.
Right now I get a lot of "deny file-write-data /Users/user/Desktop/MyApp.app/Contents/Resources/DB/app_db.db" messages in the Console.
I can't use com.apple.security.temporary-exception.files.home-relative-path.read-write because I don't want to rely on where the end user puts the app.
In spite of the warnings in the Console, my app is working. Is there an entitlement that grants access, or should I copy my db to the container during the first start?
Even reading from a db can require the DB to write, to create locks and so on, and you don't have a lot of control over that (you may have some, depending on the db). Follow your own intuition: copy the db to your container on first run.
Also file a radar; it can't hurt.
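For orientation, the container that answer refers to lives at a fixed place on disk. A sketch with a hypothetical bundle id; note that inside the sandbox, $HOME already resolves to the container's Data directory, so app code can simply copy relative to $HOME:
# The per-app sandbox container, as seen from outside the sandbox
ls "$HOME/Library/Containers/com.example.MyApp/Data/Library/Application Support/"
# From inside the sandboxed app, the same directory is just:
#   $HOME/Library/Application Support/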

Silverlight - Copy files directly into where Isolated Storage physically stores files

In our Silverlight business application, we need to cache very large files (100's MBs) into isolated storage. We distribute the files separately to be downloaded by users and then they can import those files into Isolated Storage through the application.
However, the Isolated Storage API seems to be very slow, and it takes an hour to import about 500MB of data.
Given that we're in a corporate environment where users trust us, I would like users to be able to copy the files directly into the physical location on their file system where Silverlight stores files when using the API.
The location varies per OS, but that's OK. The problem, however, is that Silverlight seems to store files in a somewhat cryptic way. If I go to my AppData\LocalLow\Microsoft\Silverlight\is, I can see some weirdly named folders that look like long GUIDs.
My question: is it possible to copy files directly in there, or is it going to upset Silverlight?
From what I've been testing, it will make things fail or act weird. We had some files we had to clear, and even though we deleted them to see how it worked, the used space didn't drop. So there is some sort of register of which files are in IS and how big they are.
I think it is paramount that you find out why IS is so slow. Can you confirm it's like that on all clients? Test some others. This should be brought up with Microsoft if it is the case. Possibly you can change your serialization schema and save smaller files? I would not advise trying to figure out Microsoft's temporary and volatile IS storage location.

SQLite vs. SQLCE Deployment

I am in the process of writing an offline-capable smartclient that will have syncing capability back to the main backend when a connection can be made. As a side note, I considered the Microsoft Sync Framework but since I'm really only going one-way I didn't feel it would buy me enough to justify it.
The question I have is related to SQLite vs. SQLCE and ClickOnce deployments. I've dealt with SQLite before (impressive little tool) and I've dealt with ClickOnce, but never together. If I setup an installer for my app via ClickOnce, how do I ensure during upgrades the local database doesn't get wiped out? Is it possible to upgrade the database (table structure, etc. if necessary) as part of the installer? Or is it better to use SQLCE for something like this? I definitely don't want to go the route of installing SQL Express or anything as the overhead would be far too high for what I am doing.
I can't speak about SQLite, having never deployed it, but I do have some info about SQLCE.
First, you don't have to deploy it as a prerequisite. You can just include the DLLs in your project; you can check this article, which explains how. This gives you fine-grained control over which version is being used, and you don't have to deal with installing it per se.
Second, I don't recommend that you deploy the database as a data file and let ClickOnce manage it. When you change that file, ClickOnce will publish it again and put it in the data directory. Then it will take the previous one and put it in the \pre subfolder, and if you have no code to handle that, your user will lose his data. So if you open the database file to look at the table structure, you might be unpleasantly surprised to get a phone call from your user about the data being gone.
If you need to retain the data between updates, I recommend you move the database to the [LocalApplicationData] folder the first time the application runs, and reference it there. Then if you need to do any updates to the structure, you can do them programmatically and control when they happen. This article explains how to do this and why.
The other advantage to putting the data in LocalApplicationData is that if the user has a problem and has to uninstall and reinstall the application, his data is retained.
Regardless of the embedded database you choose your database file (.sqlite or .sdf) will be a part of your project so you will be able to use "Build Action" and "Copy to Output Directory" properties of that file to control what happens with the file during the install/update.
If you choose "Do not copy" it will not copy the database file and if you choose "Copy if newer" it will only copy if you have a new version of your database file.
You will need to experiment a little, but by using these two properties you can have full control over how and when your database file is deployed/updated.

Wordpress database migration

I've looked around the Wordpress forums about this and didn't find anything so I thought I might try here.
If you have a staging/dev WordPress setup used for testing new plugins and such, how do you go about migrating the data in the staging database back to the production database? Is there a "WordPress best practices" way to do this, or am I limited to manually migrating tables from one database to the other?
I have a script that mysqldumps a copy of my production Wordpress DB, restores it over my test Wordpress install & then corrects all the "production" settings & urls in the test DB.
Both my production & test databases live on the same server, but you could change the mysqldump settings to dump from a remote mysql server & restore to a local server quite easily.
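For the remote variant, the only change is a --host flag on each end. A sketch, with a placeholder host name and reusing the variables from the script below:
# Dump from the remote production server, restore straight into the local test db
mysqldump --host=db.example.com --user=$dbUser --password=$dbPassword coach_production \
  | mysql --host=localhost --user=$dbUser --password=$dbPassword coach_test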
Here are my scripts:
overwrite_test.coach_db_with_coache_db.sh
#!/bin/bash
dbUser="co*******"
dbPassword="*****"
dbSource="coach_production"
dbDest="coach_test"
tmpDumpFile="/tmp/$dbSource.sql"
# Dump production, then restore it over the test database
mysqldump --add-drop-table --extended-insert --user=$dbUser --password=$dbPassword --routines --result-file=$tmpDumpFile $dbSource
mysql --user=$dbUser --password=$dbPassword $dbDest < $tmpDumpFile
# Rewrite the production settings and urls for the test environment
mysql --user=$dbUser --password=$dbPassword $dbDest < /AdminScripts/change_coach_to_test.coach.sql
change_coach_to_test.coach.sql
-- Change all db references from @oldDomain to @newDomain
SET @oldDomain = 'coach.co.za';
SET @newDomain = 'test.coach.co.za';
SET @testUsersPassword = 'password';
UPDATE `wp_1_options` SET `option_value` = REPLACE(`option_value`,@oldDomain,@newDomain) WHERE `option_name` IN ('siteurl','home','fileupload_url');
UPDATE `wp_1_posts` SET `post_content` = REPLACE(`post_content`,@oldDomain,@newDomain);
UPDATE `wp_1_posts` SET `guid` = REPLACE(`guid`,@oldDomain,@newDomain);
UPDATE `wp_blogs` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
-- Give every account a known password on the test site
UPDATE `wp_users` SET `user_pass` = MD5( @testUsersPassword );
-- Only valid for the main wpmu site
UPDATE `wp_site` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
Perhaps you are just looking for the wrong thing. Wouldn't a backup plugin handle this with ease? I know they exist for all the big CMS packages...
The two methods would be using the export/import feature under Tools, or copying the database. I email myself a copy of my production database weekly using the WordPress Database Backup plugin.
The import feature can be problematic for moving a WordPress blog: you often have to adjust your php.ini file, because the default limit on the size of file uploads in a hosted PHP setup tends to be too small.
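For reference, these are the PHP limits usually involved. A sketch; the directive names are standard PHP, the 64M values are just illustrative:
# Show the current upload limits on the target host
php -r 'echo ini_get("upload_max_filesize"), " / ", ini_get("post_max_size"), PHP_EOL;'
# Then raise them in php.ini if needed, e.g.:
#   upload_max_filesize = 64M
#   post_max_size = 64M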
I wanted to pull the database from my production WordPress website into an offline development copy on my desktop machine, so I could modify the site and test it with a full set of the existing blog content and history.
This proved to be problematic: simply making an offline backup of the database and importing it into the local development database did not work.
Overcoming these problems in moving data from production to dev should work in the other direction as well, so I think you can use these guidelines for what you want to do; just start with the dev data and move it to prod.
The problems here were:
The permalink designations for the blog posts are all stored in the database as they would be for the online version, but my offline copy isn't at the domain address; instead it is in the localhost directory. So when I launch the site locally, although the css formatting and images are all in place (the image links being relative), the actual blog posts don't show up.
Many of the links throughout the site link back out to the internet, so if you try to navigate to archives, or comments, or categories, or the main posts, you get sent back out to the internet instead of staying in the database on the local machine.
To make sure I was doing this right, I blew away the wordpress install I had on my local machine and restarted from scratch.
Once I had a clean, new WordPress install and a brand new, freshly created default local database for it, I opened up the database in phpMyAdmin and took a look at the wp_posts table. Inside there, each record (in other words, each post) has a column titled "guid", which shows the location of that post. For example, the first one in a fresh, default install contains this "guid" value:
http://localhost/wordpress/?p=1
If you look in the wp_posts table of your online version, you'll see the url of your online site in this column instead.
You can't just import the tables wholesale into your local install, because you'll be importing all these outside references. It will make your local version impossible to navigate locally.
So, I created a backup copy of my online site's database and saved it locally as a .sql file. I then opened that file in a text editor (I used Notepad++, a great piece of free software, but you could use any text editor). Things I needed to look out for:
For whatever reason, the tables on my online site aren't just, for example, "wp_posts" - they are "wp_something_posts"... there are some extra letters in the table names.
Any references to http://... that contain my online url instead of localhost/wordpress.
To keep it simple, let's do only the posts. In the backup copy of the .sql you've made of your online database, find the beginning of the wp_posts table. It will look something like this:
--
-- Table structure for table `wp_posts`
--
DROP TABLE IF EXISTS `wp_posts`;
CREATE TABLE `wp_posts` (
...and so on. Highlight everything above that, up to just below the comment marking the beginning of the database at the top of the file (it will say -- Database: 'your database name'), and delete it. Then go to the end of your wp_posts table and delete everything after the end of it, down to the bottom of the file. Now your file contains only your posts and nothing else.
Save this as a separate document. Call it posts.sql or something like that.
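As an aside, if you can run mysqldump against the online database, dumping the single table produces the same posts.sql with no hand-trimming. A sketch, using the same placeholder style as the commands further down:
# Dump only the posts table - equivalent to the hand-trimmed file
mysqldump -u[username] -p[password] [database] wp_something_posts > posts.sql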
Now, in this posts.sql file, you need to do two find/replace actions (a scripted version of both is sketched after the next step):
Find every instance of the table name wp_something_posts and replace it with wp_posts. You only need to do this if your backup copy of your online database doesn't match your clean local install as far as the table names go. You want whatever the table name is in this file to match what your locally installed WordPress database has as its table name. If you don't make these names match, you will just end up importing the posts into a new, differently named table, which will be of no use to you at all.
Find every instance of http://... (replace the ellipsis with your url) and replace it with http://localhost/wordpress (or whatever the local url to your dev version of the site is).
Now save this file again, to make sure you've got these changes set.
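If you do have a shell handy, the same two replacements can be scripted. A sketch; the table name and urls are placeholders for whatever yours actually are:
# Rename the table and rewrite the online url in one pass (keeps a .bak copy)
sed -i.bak \
  -e 's/wp_something_posts/wp_posts/g' \
  -e 's|http://www.yoursite.com|http://localhost/wordpress|g' \
  posts.sql
One caveat either way, scripted or manual: a plain textual replace can corrupt PHP-serialized values, whose embedded string lengths stop matching; the answer about serialized objects further down addresses this.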
Now that you've done that, use phpMyAdmin to get into the wordpress database on your local machine, select the "import" tab and navigate the selector to the posts.sql file you just made, and then import it. This will pull all the data in that file into your local wp_posts table.
When that finishes, browse your local wordpress site. You'll see all your posts in there now. Hooray!
You may need to do something similar for a few other tables if you want to bring in your comments, tags, categories, and static pages you've created, etc.
I realize this is a convoluted process. There is probably a tool out there somewhere that makes this activity easier, and if someone knows of one I'd love to find out about it. If someone knows of a better way to do this manually than what I've described, I'd love to know that as well!
Until then, this is the way I figured out how to do it. Hopefully it helps get you going in the right direction.
You need to handle the serialized objects. Here is a client-side HTML5 utility to handle it. Because it is all JavaScript, it's quite fast.
The alternative would be hooking a bash script into your deployment, so that once the site is deployed, the db is backed up and the serialized data is rewritten with the new domain.
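As one concrete way to script that step, a sketch assuming WP-CLI is available on the server (not the author's utility; unlike sed, wp search-replace understands PHP-serialized values):
# Rewrite the domain everywhere without breaking serialized string lengths
wp search-replace 'http://www.production.com' 'http://staging.server.local' --skip-columns=guid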
This about sums up the problems with the WordPress core architecture... but I wrote a plugin that solves the problems with domain names and absolute urls being stored in the database:
http://wordpress.org/extend/plugins/root-relative-urls/
This will solve the problems outlined by @oddbill. Don't worry too much about the url being in the guid column, though, as that field is never used for link generation.
@markratledge provides a couple of links to some lengthy documents that basically say this:
# export
mysqldump -u[username] -p[password] [database] > backup.sql
# import
mysql -u[username] -p[password] [database] < backup.sql
You'll want to exclude the comments/commentmeta tables if you push to production from staging, so you don't lose all of your comments and trackbacks (@DavidLaing's approach will wipe those out). And this assumes you only make content changes in your staging environment. If you want to make changes in both production and staging, you'll need to write scripts that sync the data instead of wholesale overwriting it... good luck with that task; may I suggest adding created and modified timestamp columns before you invest too much time in the current schema.
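A sketch of that exclusion, in the same placeholder style as above (table names assume the default wp_ prefix):
# Dump everything except the comment tables before pushing to production
mysqldump -u[username] -p[password] --ignore-table=[database].wp_comments --ignore-table=[database].wp_commentmeta [database] > staging.sql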
And finally, @RussellStuever's approach is suitable in most circumstances; just be sure to know when you are browsing your hosts-mapped site versus your production site. And really be sure about it, because some browsers cache DNS lookups for days until you physically close them and start a new process. Switching hosts may then take some time, and switching frequently can get frustrating. And if you need to test with an iPhone, you'll need to publish the site live first, or use a good router that can remap outbound internet requests to local servers, because you cannot modify the hosts file on most mobile devices.
My plugin lets you develop and test from http://localhost/ or http://staging.server.local/ or http://www.production.com without any of the usual pitfalls. And then to migrate data, it's as simple as exporting and importing the data, no search & replace step or database setting tweaks necessary.
And don't rely on the import/export tool; it doesn't capture everything in typical WordPress installations, and it still requires a needless search & replace step.
