I need to transfer the sales tables from an old Magento database to a new one.
How can I do that without deleting the records from the new database, and which are the sales tables?
Given how complex Magento's schema is, I would advise against directly writing into the database. If you still want to do it, there is this article that might help you understand the process as well as all the tables involved. Generally speaking, the tables are prefixed with 'sales_'.
Since Magento's core import/export functionality is limited to Products and Customers, your best option is probably to look for an extension that will do this, or to write your own. Here is a related question on SO that links to some paid extensions for this.
The sales IDs shouldn't conflict, assuming you intend to transfer your customers as well, since there may also be new customers that correspond to the sales. To keep this simple and short (which this process really is once you know which tables to export), you do the following:
Export all customer_ and sales_ tables (these may have a prefix, in which case they will look something like yourprefix_customer...).
Then make sure that the last order ID is updated in the eav_entity_store table, so that Magento creates new orders with the correct IDs. Do the same for the other three rows in eav_entity_store, which are for invoices, shipments, and credit memos.
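As a rough sketch of that last step, assuming a Magento 1 schema (the entity_type_id, store_id, and increment value below are placeholders, so check them against your own eav_entity_type and eav_entity_store tables first):

-- Check the last used increment IDs for the four sales document types
SELECT t.entity_type_code, s.store_id, s.increment_last_id
FROM eav_entity_store AS s
INNER JOIN eav_entity_type AS t ON t.entity_type_id = s.entity_type_id
WHERE t.entity_type_code IN ('order', 'invoice', 'shipment', 'creditmemo');

-- Bump increment_last_id on the target so new documents continue
-- after the imported ones (values shown are illustrative only)
UPDATE eav_entity_store
SET increment_last_id = '100000999'
WHERE entity_type_id = 5
  AND store_id = 1;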
Here is a more detailed tutorial on this topic if needed.
I strongly recommend using a plugin for that situation.
This free plugin worked for me: just install it through Magento Connect and then (after refreshing the cache) you'll see an "Exporter" tab in the menu with options for import and export.
There is one big question in my mind:
Why do plugin developers use post meta for saving data in the database?
Why don't they use separate tables?
I know that if you have a lot of data you should save it in separate tables, and that data related to a post is better saved in wp_postmeta.
What other reasons are there to store or not store data in post meta?
Ahh, why indeed? WordPress's meta tables can be slow to query and confusing to use. The string-only meta values present real problems when you use them to store numbers or datestamps, for just one example of how confusing they are.
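To make that concrete, here is a sketch of the difference (the meta key and the custom table are made up for illustration): a numeric filter against wp_postmeta has to cast a string meta_value, while a purpose-built table can store and index a real numeric column.

-- Filtering on a "number" stored in postmeta: meta_value is LONGTEXT,
-- so the comparison needs a cast and cannot use a numeric index
SELECT p.ID
FROM wp_posts AS p
INNER JOIN wp_postmeta AS m ON m.post_id = p.ID
WHERE m.meta_key = 'ticket_price'
  AND CAST(m.meta_value AS DECIMAL(10,2)) > 100;

-- The same filter against a hypothetical custom table with a real
-- DECIMAL column, which can be indexed directly
SELECT post_id
FROM wp_myplugin_tickets
WHERE price > 100;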
They do permit developers to extend WordPress's data model to handle many imaginable applications, without extra tables (or worse, custom columns added to the users or posts tables). If it weren't for this extensibility I suspect nobody would have heard of WordPress in 2022.
But here's the thing. Most people who own sites or develop plugins (or themes) for the WordPress.org software ecosystem aren't proficient with designing or developing for SQL tables. It's easier for many to rely on the meta table instead.
Some plugins (Yoast, Relevanssi, WooCommerce for example) have their own tables, and your plugin can have them too if you need them.
If you publish a plugin like that, you must include code to create your tables when your user first activates your plugin, and to drop them when she deletes it. And you need to test those cases carefully, lest you leave junk behind in your users' databases.
You must be careful to use the right $wpdb->prefix for your table names (or your plugin will collapse in a heap of digital rubble on multisite installations). To avoid SQL injection attacks you must use $wpdb->prepare(). And there are other things to keep in mind. Study up on the $wpdb class.
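As a minimal sketch of the activation and uninstall SQL (the table name and columns are hypothetical, and in a real plugin you would build the table name from $wpdb->prefix and run the statements through $wpdb or dbDelta() rather than hard-coding wp_):

-- Run on activation (prefix hard-coded here only for illustration)
CREATE TABLE IF NOT EXISTS wp_myplugin_tickets (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    post_id BIGINT UNSIGNED NOT NULL,
    price DECIMAL(10,2) NOT NULL,
    created_at DATETIME NOT NULL,
    PRIMARY KEY (id),
    KEY post_id (post_id)
);

-- Run on uninstall so no junk is left behind
DROP TABLE IF EXISTS wp_myplugin_tickets;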
Problem:
Installing WordPress plugins to test their effectiveness sometimes creates additional tables in the database. After testing some plugins, it becomes difficult to quickly identify and delete the unneeded tables they created.
What I want:
I want to add the tables I need to phpMyAdmin's favorites list so that I can quickly identify newly created tables (tables without the yellow star) and drop them easily.
Question:
Is there a way to make phpMyAdmin remember that favorites list without enabling the phpMyAdmin configuration storage?
Or is there any other method that makes it easy to distinguish newly created tables from old ones?
Using the phpMyAdmin Configuration Storage is really the only way to enable and manage the favorite tables. Is there some reason you don't want to configure it? You can set it up in another database or the same database or a handful of other ways meant to make it easy for all sorts of situations (shared hosting, limited rights, etc) so hopefully there's a solution that works for you. If not, there may be some other tool that helps you track changes.
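If you still want a workaround without the configuration storage, one option (assuming MySQL/MariaDB, and that the database name and cutoff date below are replaced with your own) is to ask information_schema which tables were created recently:

-- List tables created after a cutoff date in a given database
SELECT table_name, create_time
FROM information_schema.tables
WHERE table_schema = 'wordpress'
  AND create_time > '2024-01-01'
ORDER BY create_time DESC;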
We have implemented Sitecore for our future site, and we are going live soon with our corporate site. We had some testing done on this site to make sure everything was working right, and as a result some data was written to the Analytics database.
Now we are looking to get rid of this data and start fresh. We want to go the route of truncating the data out of the tables. Is there a way to do this? I know this could be done with built-in functionality in the OMS but not the DMS. Also, which tables would be safe to truncate?
Thanks
Personally, I would start fresh by attaching an empty DMS database. You will need to redeploy any goals, page events, campaigns, etc., but it's a much safer option than truncating the tables.
Another thing to consider is that most (if not all) of the reports in the DMS are set up to accept a start and end date. Simply running your reports starting from the launch date may be all you need.
If you decide to truncate the tables, I would focus on any tables that have a Foreign Key relationship to the Visits table (the DMS ships with a Database Diagram that's really handy for stuff like this). Going in order that would be the PageEvents, Pages, and Profiles tables. Then it should be safe to clear out the Visits table.
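A rough sketch of what that could look like in T-SQL, assuming the default DMS table names (verify the order against the database diagram in your own install, and back up the database before running anything like this):

-- Clear the child tables first so foreign keys to Visits are not violated
DELETE FROM PageEvents;
DELETE FROM Pages;
DELETE FROM Profiles;

-- Then clear the parent table
DELETE FROM Visits;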
I'm looking for the fastest-to-success alternative solution for related data migration between Salesforce environments with some specific technical requirements. We were using Apatar, which worked fine in testing, but late in the game it has started throwing the dreaded socket "connection reset" errors and we have not been able to resolve it - it has some other issues that are leading me to ditch it.
I need to move a modest amount of data (about 10k rows total) between several sandboxes and ultimately to a production environment. The data is spread across eight custom objects. There is a four-level-deep master-detail relationship, which obviously must be preserved.
The target environment tables are 100% empty.
The trickiest object has a master-detail and two lookup fields.
Ideally, the data from one table near the top of the hierarchy should be filtered by a simple WHERE clause, and children not under matching rows should not be migrated, but I'll settle for a solution that migrates all the data wholesale.
My fallback in this situation is going to be good old Data Loader, but it's not ideal because our schema is locked down and does not contain external ID fields, so scripting a solution that preserves all the M-D and lookups will take a while and be more error prone than I'd like.
It's been a long time since I've done a survey of the tools available, and I don't have much time to do one now, so I'm appealing to the crowd. I need an app that will be simple (able to configure and test very quickly), repeatable, and rock-solid.
I've always pictured an SFDC data migration app that you can just check off eight checkboxes from a source environment, point it to a destination environment, and it just works, preserving all your relationships. That would be perfect. Free would be nice too. Does such a shiny thing exist?
Sesame Relational Junction seems to best match what you're looking for. I haven't used it, though; so, I can't comment on its effectiveness for what you're attempting.
The other route you may want to look into is using the Bulk API or using the Data Loader CLI with Task Scheduling.
You may find this information (below), from an answer to a different question, helpful.
Here is a list of integration services (other than Apatar):
Informatica Cloud
Cast Iron
SnapLogic
Boomi
JitterBit
Sesame Relational Junction
Information on other tools, to integrate Salesforce with other databases, is available here:
Salesforce Web Services API
Salesforce Bulk API
Relational Junction has a unique feature set that supports cloning, splitting, and merging of Salesforce orgs, and will keep the relationships intact in a one-pass load. It works like this:
Download the source org to an empty database schema (any relational DBMS)
Download the target org to a second empty database schema
Run some scripts to condition the data; this varies by object. Sesame provides guidance and sample scripts, but essentially you have to set a control field to tell Relational Junction to create or update Salesforce. This is also where you may need to replace source IDs with target IDs if some objects have been pre-populated during sandbox creation.
Replicate the second database to the target org
Relational Junction handles the socket disconnects, timeouts, and whatever havoc happens during the unload/reload process gracefully and without creating duplicates.
This process was developed for a proof of concept at a large Silicon Valley network vendor in 2007, who became a customer. The entire down and up of 15 GB of data took 46 hours, plus about 2 days of preparation.
Sorry if this seems a little crazy, but I've been messing around with NHibernate for a while and have come across a scenario that may not be possible to solve with NHibernate...
I have one database that contains a load of static data; imagine it like a huge product lookup database. Just to point out, this is an example scenario; my actual one is a bit more complex but works on a similar principle... It is hosted on a completely different box, so I can't do "database2.table1.somecolumn", which I noticed as a possible way round the issue if the two DBs were on the same box and server.
Anyway, I also have another DB which contains data relating to users. So imagine a user has bought a load of stuff from Generic Website A: you have a list of IDs pertaining to what they have bought and an amount of how many they bought, as well as some other information, but the actual data relating to the product is stored in the other database...
So if you imagine you want to combine this data into a PreviousPurchasedProduct model which contains all the information from the first database and the additional data from the second DB, you would have to do a query similar to this (if they were all on one box):
SELECT db1.products.*, db2.purchases.*
FROM db2.purchases
INNER JOIN db1.products ON db2.purchases.product_id = db1.products.id
WHERE db2.purchases.user_id = XXX;
Now, first of all, is it possible to map this sort of thing even though they are in separate DB hosts? I'm guessing not, and if that's the case, can you achieve this flexibility via a child class? So having a product class that purely works off db1, and a derived class that takes the purchases info and only works from db2.
Also, is it possible to restrict the db1 portion of data from INSERT/UPDATE/DELETE statements? I'm pretty sure you can in the default mappings, but as this would be outside of the per-class scope I'm not sure what flexibility I have...
Thanks for reading my waffle :D
I would recommend you first understand how to solve this problem without NHibernate. Once you have one or more clean solutions that work without NHibernate, come back and update your question to say something like "how do I represent this SQL in NHibernate?".
Querying across databases can quickly bring database vendor specific quirks into play and you never mention which database vendor(s) you are dealing with. If both databases are the same vendor, you may be able to link the databases somehow using database vendor specific techniques (but linking isn't necessarily a good solution, so you'll want to try to uncover alternatives as well).
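For instance, if both boxes happened to be SQL Server, a linked server is one such vendor-specific technique. The sketch below assumes a linked server named RemoteProducts has already been registered (for example via sp_addlinkedserver), and all the database, schema, and column names are illustrative:

-- Four-part names let you join a local table to one on the linked server
SELECT p.*, pu.*
FROM db2.dbo.purchases AS pu
INNER JOIN RemoteProducts.db1.dbo.products AS p
    ON pu.product_id = p.id
WHERE pu.user_id = 123;

Whether something like that performs acceptably depends heavily on the size of the cross-server join, which is part of why it's worth uncovering alternatives as well.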