Moving Magento to new server checklist - sql-server

So I'm trying to manually move Magento to a new server using the old-school FTP/phpMyAdmin method. I only have the SQL dump and the folders from the root of the old server; I no longer have access to it and I don't know the previous Magento version number.
Should I
do a fresh install of Magento on the new server and then substitute the folders and somehow import the SQL?
or
dump the files and SQL onto the new server first and then run the installer (or something else)?
Many thanks

Usually the following steps are followed:
Empty your cache and admin sessions.
Empty your logs (MySQL).
Create a backup of your database and your files.
Check that your new server supports Magento.
Upload your files and import your DB. If your database is very large, import it directly into your MySQL server from the command line. The command would look like this:
mysql -u username -p database_name < file.sql
For this you need to be logged in to your server (with access to MySQL).
Update your MySQL credentials in your app/etc configuration file and update the base URL paths in the core_config_data table (see the example below).
Place some dummy orders to check that emails are delivered correctly and that payment, shipping, etc. work.
Buy SSL certificates and avoid shared hosting.
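For the core_config_data update, here is a minimal SQL sketch, assuming a default-scope Magento configuration and a hypothetical new domain; adjust the table prefix and URLs to your installation:
-- hypothetical new domain; run against the imported database
UPDATE core_config_data
SET value = 'http://www.example.com/'
WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url')
AND scope = 'default';
After changing the base URLs, clear the cache again so the new values take effect.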

Related

Import/Export PostgreSQL db "without" pg_dump or sql file / backup, etc...?

I need to import an old DB into a new Postgres server.
Is there a way to migrate an old database to a new server without using pg_dump?
I don't have the SQL file or the old server's backup file, nor the user and the password, just the physical files in the "\data" folder. Is there any way to do this?
The target server is the same version as the old server.
Thanks.
Well, as a test you could try:
pg_ctl start -D $DATA
where pg_ctl comes from the target version and $DATA is the /data directory. You have not said how you came to have just a /data directory. If this came from an unclean shutdown or a corrupted drive, the possibility exists that the server will not start.
UPDATE
To get around the auth failure, find pg_hba.conf and create or modify the local connection entry to use the trust method (a sketch of such an entry follows the ALTER ROLE step). For more info see pg_hba and trust. Then you should be able to connect like:
psql -d some_db -U postgres
Once in, you can use ALTER ROLE to change the password:
ALTER ROLE <role_name> WITH PASSWORD 'new_password';
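As a rough sketch of the pg_hba.conf change, assuming a local Unix-socket connection (the exact lines vary by installation, and you should revert to md5/scram once the password is reset):
# assumed local (Unix-socket) entry in pg_hba.conf; revert after resetting the password
local   all   postgres   trust
Reload the configuration afterwards (for example pg_ctl reload -D $DATA) so the change takes effect.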

Database Link to ATP/ADW

I am having challenges in creating a database link from an ORACLE DBCS to an ORACLE ATP.
I am creating a database link from an ORACLE DBCS (PaaS) to an ORACLE ATP (Autonomous Transaction Processing) database. I can't seem to get the proper connection set up for this. Has anyone been able to do this successfully?
My connection to the ORACLE ATP from SQL Developer uses a zipped wallet.
CREATE DATABASE LINK TARGET_DB
CONNECT TO admin IDENTIFIED BY "Myp#ssword123!"
USING
'(DESCRIPTION=
(ADDRESS=
(PROTOCOL=tcps)
(HOST=99.99.99.99)
(PORT=1522))
(CONNECT_DATA=
(service_name=eoakbwd540pwkbi_myuseratp_high.atp.oraclecloud.com)))';
-- ip address and service names are fake
When I test the DB link using SQLDeveloper I get the ORA-28788 error code.
0. Setup
You start out with two instances:
DBCS - in my case Enterprise Edition/12.2 with port 1521 opened in the security lists
ATP Instance
Download the wallet zip-file from the ATP instance containing a tnsnames.ora, sqlnet.ora and some wallet files.
Then upload the unzipped files to your DBCS instance.
1. Wallet Configuration
On DBCS: Replace the sqlnet.ora and tnsnames.ora in the $ORACLE_HOME/network/admin folder with the ones from the zip file (you might need to merge them if you have existing entries that are still needed).
Replace the WALLET_LOCATION in the sqlnet.ora file with the actual location of your wallet files (specifically the cwallet.sso and ewallet.p12), as sketched below. Make certain the permissions allow the oracle user to read them.
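For reference, a minimal sqlnet.ora sketch; the directory is a hypothetical path on the DBCS host and should point to wherever you uploaded the unzipped wallet files:
# sqlnet.ora on the DBCS instance (wallet path is an assumed example)
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY = "/u01/app/oracle/atp_wallet")))
SSL_SERVER_DN_MATCH = yes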
2. Database Link
You have two options for the database link (that I know of). First get the service names (e.g. randomatp_high) from your tnsnames.ora file.
Using the username/password of your ATP admin user in the database link connection command
create database link <DBLinkName> connect to ADMIN identified by "<ATPpassword>" using '<ATPServiceName>';
Create two users with the same username and the same password in DBCS and ATP, connect to DBCS as that user and then:
create database link <DBLinkName> using '<ATPServiceName>';
You might need to use alter session set global_names=false; to help with ORA-02085 saying the database link is connected to a different DB.
3. Test
Test the database link with e.g.:
select banner from v$version@<DBLinkName>;

Automating import of data-tier application (SQL database) from Azure with a Master Key

When I extracted a data-tier application from a Microsoft Azure SQL database that has a Master Key, I was unable to import it into SQL Server on my local PC.
You will find others had this issue here: SSMS 2016 Error Importing Azure SQL v12 bacpac: master keys without password not supported
However the steps provided as the answer did not work on my installation.
The steps are:
1. Disable auditing on the server (or database)
2. Drop the database master key with DROP MASTER KEY command.
Microsoft Tech Support verified that this solution did not work on my installation of SQL Server, and after actually taking remote control of my PC and troubleshooting, they were unable to determine why this was occurring.
I needed to find a way to remove the Master Key from the bacpac file. I have a PowerShell script that does this, but it requires extracting and renaming files and running scripts from Windows PowerShell to get the DB imported.
Does anyone have a program or set of scripts which would automate the process of removing the Master Key and importing a SQL DB from Azure with a single command?
I am new to this forum. Please do not be harsh with this post. I am trying to do the best I can to help others to save the many hours I spent coming up with this.
I have cobbled together a T-SQL script which calls a Windows PowerShell script (also cobbled from multiple sources) to extract a data-tier application (database) from a Microsoft Azure SQL database and import it into a database on my local SQL Server by running ONE command. Over the months I found some of the code that is in my scripts on other blogs etc. I am not able to give credit to those folks as I didn't keep track of where I got the info. If you are reading this and you see your code, please take credit. I apologize for not being able to give you the credit for your work.
There may be configuration settings on your PC and your local SQL server that need adjustments as this entire solution requires pretty much full access to your computer. If you run into trouble with compatibility, let me know and I will do the best I can to let you know how my system is configured in case it will help you.
I am using Windows 10 Pro and Microsoft SQL Server Developer (64-bit) v12.0.5207.0
I have placed the two files that do all the work on GitHub here: https://github.com/Wingloader/Auto-Azure-BACPAC-Download.git
GetNewBacpac-forGitHub.sql
GetAzureDB-forGitHub.ps1
WARNING: The Powershell script file will store your SQL sa password and your Azure SQL login in clear text!
If you don't want to do this, don't use this solution.
My computer is owned and controlled solely by me so I am able to open up the security in my system and I am willing to assume the responsibility of safeguarding it.
The basic steps of my solution are accomplished as follows (steps 1 and 2 are optional, as I like to keep a version of the DB I am working with as of the point in time when I pull down a clean production copy of my Azure DB):
Back up the current DB as MyLocalDB.bak.
Restore that backup from step 1 to a new DB with the previous day stamped at the end of the DB name (e.g., MyLocalDB20171231)
Delete the original MyLocalDB database (needed so we can recreate the DB with the original name later on)
Pull down the production database from Azure and create a new database with the name MyLocalDB.
The original DB is deleted in step 3 so that the restored DB can use the original name (important when you have data connections referring to that DB name)
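As a rough T-SQL illustration of steps 1 through 3 (the names, paths, and logical file names here are hypothetical; the actual script in the GitHub repo is the authoritative version):
-- Step 1: back up the current local DB (hypothetical path)
BACKUP DATABASE MyLocalDB TO DISK = 'C:\Backups\MyLocalDB.bak' WITH INIT;
-- Step 2: restore it under a date-stamped name (logical file names assumed)
RESTORE DATABASE MyLocalDB20171231 FROM DISK = 'C:\Backups\MyLocalDB.bak'
WITH MOVE 'MyLocalDB' TO 'C:\Data\MyLocalDB20171231.mdf',
     MOVE 'MyLocalDB_log' TO 'C:\Data\MyLocalDB20171231_log.ldf';
-- Step 3: drop the original so the imported copy can reuse the name
ALTER DATABASE MyLocalDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE MyLocalDB;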
In Step 4, the work of extracting the data-tier application DB from Azure is initiated by this line in the T-SQL:
EXEC MASTER..xp_cmdshell '%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe -File "C:\Git\GetUpdatedAzureDB\GetAzureDB.ps1"'
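Note that xp_cmdshell is disabled by default on a local SQL Server installation; if it is off on your machine, something like the following (run as a sysadmin) enables it. This is my assumption about the required configuration, not part of the original scripts:
-- enable xp_cmdshell (disabled by default); requires sysadmin
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;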
The PowerShell script does the following:
The target for the extract is a file named today.bacpac (hardcoded). The first thing it does is delete that file if it already exists.
Extract the DB from Azure into the today.bacpac file.
Note: my DB on Azure has a Master Key for encryption. This will need to be removed from the files prior to importing the bacpac file into your local DB or the import will fail (this may not be required in SQL 2017, according to my previous conversations with MS Support). If you do not use a Master Key, you can either strip out the code that does this step or just leave it alone; it won't remove anything that isn't there, it would just add a little overhead to the run.
Open the today.bacpac file (a zip file) and remove the MasterKey node from the Origin.xml file.
Modify the Model.xml file to update the SHA hash length. This is required so that the file does not appear to have been tampered with when SQL opens the bacpac file.
Re-zip the files into a new file, today-patched.bacpac.
Run this line of code (from PowerShell) to import the bacpac file into SQL Server:
& "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe" /Action:Import /SourceFile:"C:\Git\GetUpdatedAzureDB\today-patched.bacpac" /TargetConnectionString:"Data Source=MyLocalSQLServer;User ID=sa; Password=MySAPassword; Initial Catalog=MyLocalDB; Integrated Security=false;"
After editing the two files to provide updated paths, usernames and passwords, run the SQL script. You do not need to edit the scripts again. You can run the SQL script again without modification and it will create a new copy of your Azure DB.
Done!

How to push data from local SQL Server to Tableau Server on AWS

We are developing Tableau dashboards and deploying the workbooks on an EC2 Windows instance in AWS. One of the data sources is the company SQL Server inside the firewall. The server is managed by IT and we only have read permission on one of the databases. The current solution is to build the workbooks on Tableau Desktop locally by connecting to the company SQL Server. Before the workbooks are published to Tableau Server, the data is extracted from the data sources, and the static extracts are uploaded with the workbooks when published.
Instead of linking to static extracted data on Tableau Server, we would like to set up a database on AWS (e.g. PostgreSQL), probably on the same instance, and push the data from the company SQL Server to the AWS database.
There may be a way to push directly from SQL Server to Postgres on AWS. But since we don't have much control of the server, and the IT folks are probably not willing to push data externally, this is not an option. What I can think of is as follows:
Set up Postgres on the AWS instance and create tables with the same schemas as the ones in SQL Server.
Extract data from SQL Server and save it as CSV files, one table per file.
Enable file-system sharing on the AWS Windows instance, so the instance can read files from the local file system directly.
Load the data from the CSVs into the Postgres tables.
Set up the data connection on Tableau Server on AWS to read data from Postgres.
I don't know if others have come across a situation like this and what their solutions are, but I think this is not an uncommon scenario. One change would be to have both the local Tableau Desktop and the AWS Tableau Server connect to Postgres on AWS. I'm not sure if local Tableau could access Postgres on AWS, though.
We also want to automate the whole process as much as possible. On the local server, I can probably run a Python script as a cron job to frequently export data from SQL Server and save it to CSVs. On the server side, something similar will be run to load the data from the CSVs into Postgres. If the files are big, though, it may be pretty slow to import data from CSV into Postgres. But there is no better way to transfer files from local to the AWS EC2 instance programmatically, since it is a Windows instance.
I am open to any suggestions.
A. Platform choice
If you use a database other than SQL Server on AWS (say Postgres), you need to perform one (or maybe two) conversions:
In the integration from the on-prem SQL Server to the AWS database you need to map from SQL Server datatypes to Postgres datatypes
I don't know much about Tableau, but if it is currently pointing at SQL Server, you probably need some kind of conversion to point it at Postgres
These two steps alone might make it worth your while to investigate a SQL Express RDS. SQL Express has no licencing cost but obviously Windows does. You can also run SQL Express on Linux, which would have no licencing costs but would require a lot of fiddling about to get running (i.e. I doubt there is a SQL Express Linux RDS available)
B. Integration Approach
Any process external to your network (i.e. on the cloud) that is pulling data from your network will need the firewall opened. Assuming this is not an option, that leaves us only with push-from-on-prem options.
Just as an aside on this point, Power BI achieves its desktop data integration by using a desktop 'gateway' that coordinates data transfer, meaning that cloud Power BI doesn't need to open a port to get what it needs; it uses the desktop gateway to push it out.
Given that we only have push options, we need something on-prem to push data out. Yes, this could be a cron job on Linux or a Windows scheduled task. Please note, this is where you start creating shadow IT.
To get data out of SQL Server to be pushed to the cloud, the easiest way is to use BCP.EXE to generate flat files. If these are going into a SQL Server, they should be native format (to save complexity). If they are going to Postgres they should be tab delimited.
If these files are being uploaded to SQL Server, then it's just another BCP command to push the native files into tables in SQL Server (prior to this you need to run a SQLCMD.EXE command to truncate the target tables).
So for three tables, assuming you'd installed the free* SQL Server client tools, you'd have a batch file something like this:
REM STEP 1: Clear staging folder
DEL /Q C:\Staging\*.TXT
REM STEP 2: Generate the export files (native format, trusted connection to the local server)
BCP database.dbo.Table1 OUT C:\Staging\Table1.TXT -E -S LocalSQLServer -T -N
BCP database.dbo.Table2 OUT C:\Staging\Table2.TXT -E -S LocalSQLServer -T -N
BCP database.dbo.Table3 OUT C:\Staging\Table3.TXT -E -S LocalSQLServer -T -N
REM STEP 3: Clear target tables
REM Your SQL RDS is unlikely to support single sign on
REM so need to use user/pass here
SQLCMD -U username -P password -S RDSSQLServerName -d databasename -Q "TRUNCATE TABLE Table1; TRUNCATE TABLE Table2; TRUNCATE TABLE Table3;"
REM STEP 4: Push data in
BCP database.dbo.Table1 IN C:\Staging\Table1.TXT -U username -P password -S RDSSQLServerName -N
BCP database.dbo.Table2 IN C:\Staging\Table2.TXT -U username -P password -S RDSSQLServerName -N
BCP database.dbo.Table3 IN C:\Staging\Table3.TXT -U username -P password -S RDSSQLServerName -N
(I'm pretty sure that BCP and SQLCMD are free... not sure but you can certainly download the free SQL Server tools and see)
If you wanted to push to Postgres instead:
in step 2, you'd need to replace the -N option with -c, which makes the files plain text, tab delimited, readable by anything
in step 3 and step 4 you'd need to use the associated Postgres command line tool (psql), but you'd need to deal with data types etc. (which can be a pain - ambiguous date formats alone are always a huge problem); a rough sketch follows this list
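A minimal sketch of that Postgres variant, assuming psql is installed on the on-prem machine, the RDS endpoint and credentials below are placeholders, and the files were exported with -c (tab delimited); \copy streams the file from the client to the server:
REM Postgres variant of steps 3 and 4 for one table (endpoint and credentials are placeholders)
SET PGPASSWORD=password
psql -h myrds.example.amazonaws.com -U username -d databasename -c "TRUNCATE TABLE Table1"
psql -h myrds.example.amazonaws.com -U username -d databasename -c "\copy Table1 FROM 'C:\Staging\Table1.TXT'"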
Also note here that the AWS RDS instance is just another database with a hostname, login, and password. The only thing you have to do is make sure the firewall is open on the AWS side to accept incoming connections from your IP address.
There are many more layers of sophistication you can build into your integration: differential replication, retries, etc., but given the 'shadow IT' status this might not be worth it.
Also be aware that I think AWS charges for data uploads, so if you are replicating a 1 GB database every day, that's going to add up. (Azure doesn't charge for uploads, but I'm sure you'll pay in some other way!)
For this type of problem I would strongly recommend use of SymmetricDS - https://www.symmetricds.org/
The main caveat is that the SQL Server would require the addition of some triggers to track changes but at that point SymmetricDS will handle the push of the data.
An alternative approach, similar to what you suggested, would be to have a script export the data into CSV files, upload them to S3, and then have a bucket event trigger on the S3 bucket that kicks off a Lambda to load the data when it arrives (a rough upload sketch is below).
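For the upload step, a minimal sketch using the AWS CLI; the bucket name and prefix are hypothetical, and the S3 event / Lambda wiring is configured separately in AWS:
REM upload the exported CSVs to a (hypothetical) staging bucket; an S3 event can then invoke the loader Lambda
aws s3 cp C:\Staging\ s3://my-staging-bucket/csv/ --recursive --exclude "*" --include "*.csv"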

CPanel WHM setup website locally and download database

Can someone please help me download a database from cPanel?
I have a website hosted using cPanel WHM.
The database is huge.
I want to make changes to the website, but would like to work on it locally.
So I downloaded my website content.
When I try to download the database, the download stops in the middle because of the huge size of the database. How can I download the full database?
Since the database is huge, it can't be downloaded through phpMyAdmin.
To download it through phpMyAdmin alone, additional parameters like the PHP execution time limit, MySQL timeout, MySQL cache size, etc. would need to be increased in files like php.ini and the MySQL configuration file.
Instead, mysqldump can be used.
Log in to your site using SSH.
Eg:
>>ssh user@your_website.com
>>Enter password: your_password
>>mysqldump -u [uname] -p[pass] [dbname] > [backupfile.sql]
[uname] Your database username
[pass] The password for your database (note there is no space between -p and the password)
[dbname] The name of your database
[backupfile.sql] The filename for your database backup
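Once the dump exists on the server, you still need to pull it down to your local machine. A minimal sketch, assuming SSH access and the same placeholder names as above (compressing first keeps the transfer smaller and less likely to be interrupted):
# on the server: compress the dump to shrink the download
gzip backupfile.sql
# on your local machine: copy it down over SSH
scp user@your_website.com:backupfile.sql.gz .
# locally: decompress and import into your local MySQL
gunzip backupfile.sql.gz
mysql -u local_user -p local_db < backupfile.sql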
