How to preserve data disk from a generalized Azure RM VM image - sql-server

I've created an Azure RM VM image of Windows 2008 R2 with SQL Server 2014 installed. The image was created with a data disk where I placed the SQL Server data directory (location for the system databases, error logs etc). The image was sysprepped then generalized, all successfully.
I created a new VM from the above image, pointing to the OS and data disk URIs. The VM gets created, but I have to go into Computer Management > Disk Management and provision the drive from the presented volume. Since SQL Server's startup process looks for the error logs, system databases etc., which do not exist there, it's basically a failed install.
Is there a way to preserve the data on the data disk, then provision that into Windows, programmatically?

Is there a way to preserve the data on the data disk, then provision that into Windows, programmatically?
Yes, you can. You can use Azure PowerShell to create an image of a generalized Azure VM and then use that image to create another VM. The image includes the OS disk and the data disks that are attached to the virtual machine. I have tested this in my lab and it works for me.
# Deallocate the VM, then mark it as generalized
Stop-AzureRmVM -ResourceGroupName shuitest1 -Name shui -Force
Set-AzureRmVm -ResourceGroupName shuitest1 -Name shui -Generalized

# Confirm the VM now reports the "Generalized" status
$vm = Get-AzureRmVM -ResourceGroupName shuitest1 -Name shui -Status
$vm.Statuses

# Capture the image (OS disk + data disks) and save a deployment template locally
Save-AzureRmVMImage -ResourceGroupName shuitest1 -Name shui -DestinationContainerName "shuitest" -VHDNamePrefix "shuitest" -Path "D:\Filename.json"
For more information about how to capture a VM image from a generalized Azure VM, please refer to this link.
You can use the image (it contains the OS disk and data disks, but no virtual network) to deploy your VM. For more information about how to create a VM from a generalized managed VM image, please refer to this link.
Also, you can use the local JSON template file to deploy your VM, but you need to create a NIC first, either on the Azure Portal or with PowerShell (see the sketch after the cmdlet below). If you deploy the VM this way, it does not get a public IP, so you need to add one manually. I tested this in my lab and it works for me. If possible, I suggest you use the local JSON file to redeploy your VM. The following is my cmdlet:
New-AzureRmResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName shuitest1 -TemplateFile "D:\Filename.json"
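If you prefer not to use the portal for the NIC and public IP, a rough AzureRM sketch like this should also work (the names, location, VNet and subnet below are placeholders, not values from my lab):
# Create a public IP, then a NIC in an existing VNet/subnet (placeholder names)
$pip = New-AzureRmPublicIpAddress -Name "shui-pip" -ResourceGroupName shuitest1 -Location "West US" -AllocationMethod Dynamic
$vnet = Get-AzureRmVirtualNetwork -Name "shui-vnet" -ResourceGroupName shuitest1
New-AzureRmNetworkInterface -Name "shui-nic" -ResourceGroupName shuitest1 -Location "West US" -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id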

Related

How to reference exterior SQL storage with Docker container

Being a newbie to Docker, and thinking about storage for a SQL Server that is several hundred gigabytes or more in size, it doesn't make sense to me that it would be feasible to store that much inside a container. It takes time to load a large file, and the sensible location for a file in the terabyte range would be to mount it separately from the container.
After several days attempting to google this information, it seemed more logical to ask the community. Here's hoping a picture is worth 1000 words.
How can a SQL Server container mount an exterior SQL Server source (mdf, ldf, ndf) given these sources are on Fortress (see screenshot) and the Docker container is elsewhere, say somewhere in one of the clouds? Similarly, Fortress could also be a cloud location.
Example:
SQL CONTAINER 192.169.20.101
SQL Database Files 192.168.10.101
Currently, as is, the .mdf and .ldf files are located in the container. They should live in another location that is NOT in the container. It would also be great to know how to move that backup file out of "/var/opt/mssql/data/xxxx.bak" to a location on my Windows machine.
the sensible location for a file in the terabyte range would be to mount it separately from the container
Yes. Also, when you update SQL Server you replace the container.
This updates the SQL Server image for any new containers you create,
but it does not update SQL Server in any running containers. To do
this, you must create a new container with the latest SQL Server
container image and migrate your data to that new container.
Upgrade SQL Server in containers
So read about Docker Volumes, and how to use them with SQL Server.
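For example, here is a minimal sketch (image tags, container name and SA password are placeholders, not from the question) that keeps /var/opt/mssql in a named volume so the data survives replacing the container:
# Create a named volume and start SQL Server with its data directory stored in it
docker volume create sqldata
docker run -d --name sql1 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStr0ngP#ss" -v sqldata:/var/opt/mssql mcr.microsoft.com/mssql/server:2019-latest

# Upgrading later = remove the container, keep the volume, run the newer image with the same -v
docker stop sql1
docker rm sql1
docker run -d --name sql1 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStr0ngP#ss" -v sqldata:/var/opt/mssql mcr.microsoft.com/mssql/server:2022-latest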
Open a copy of Visual Studio Code and open the terminal
How to access /var/lib/docker in windows 10 docker desktop? - the link explains how to get to the linux bash command from within VSCode
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -i sh
cd .. until you reach the root using VSCode and do a find command
find / -name "*.mdf"
This lists a file name; in my case /var/lib/docker/overlay2/merged/var/opt/mssql/data is the storage location.
Add a storage location on your Windows machine using the docker-compose.yml file
version: "3.7"
services:
  docs:
    build:
      context: .
      dockerfile: Dockerfile
      target: dev
    ports:
      - 8000:8000
    volumes:
      - ./:/app
      - shared-drive-1:/your_directory_within_container
volumes:
  shared-drive-1:
    driver_opts:
      type: cifs
      o: "username=574677,password=P#sw0rd"
      device: "//192.168.3.126/..."
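Then rebuild and start the services so the CIFS volume defined above is created and mounted:
# Recreate the containers with the shared-drive-1 volume attached
docker-compose up -d --build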
Copy the source files to the volume in the shared drive (found here at /var/lib/docker/overlay2/volumes/). I needed to go to VSCode again for root access.
Open SSMS against the SQL instance in Docker and change the file locations: you'll detach the databases, then swap them with commands pointing to the volume where the files were moved (see the sketch below): https://mssqlfun.com/2015/05/18/how-to-move-msdb-model-sql-server-system-databases/
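Roughly, that detach/attach can be driven like this (the database name, sa password and in-container path are placeholders; the IP is the example container address from the question):
# Detach the database first...
Invoke-Sqlcmd -ServerInstance "192.169.20.101" -Username sa -Password "P#sw0rd" -Query "EXEC sp_detach_db 'YourDb';"

# ...move the .mdf/.ldf files to the volume, then attach them from the new location
$attach = "CREATE DATABASE [YourDb] ON (FILENAME = '/your_directory_within_container/YourDb.mdf'), (FILENAME = '/your_directory_within_container/YourDb_log.ldf') FOR ATTACH;"
Invoke-Sqlcmd -ServerInstance "192.169.20.101" -Username sa -Password "P#sw0rd" -Query $attach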
Using VSCode again, go to the root and give the mssql login permission to the data folder under /var/opt/docker/volumes/Fortress (not sure how to do this yet; I am working on it and will update here later if it can be done, otherwise I will remove my answer).
Using SSMS again, with the new permissions in place, attach the mdf/ldf files to the SQL Server in the Docker container.
Finally, there is a great link here explaining how to pass files back and forth between a container and the Windows machine hosting the container.
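As for getting the backup file out of the container onto the Windows host, docker cp is the simplest route; for example (the container name and destination folder are assumptions):
# Copy the backup file from the container's data directory to the Windows host
docker cp sql1:/var/opt/mssql/data/xxxx.bak C:\Backups\xxxx.bak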

How can I determine user / group when running an application?

I'm running Nextcloud on my own Virtual Private Server with Ubuntu 16 + Plesk. I see some weird behavior which I suspect is related to file access rights:
- Configured an external storage (secondary HDD, mounted in /media as /diskext) as the "shared company repository". All users should have access to this repository.
- Verified that the shared NC folder has the proper rights via chown ncadmin:psacln, where psacln is Plesk's default execution group.
- When accessing Nextcloud from my desktop, I have access to the shared repository.
- Other colleagues with the same privileges have no access!
Therefore I'd like to determine which user/group Nextcloud uses when trying to access the repository as user1, user2 or user15. I have root SSH access to the server, so I can run command-line tools...
Thanks in advance for any help.
Nextcloud uses the user under which the PHP process runs to access the filesystem.
E.g. when you are using Apache and mod_php on ubuntu this is www-data.
To check which user this is on your system create the file phpinfo.php in /var/www/html with the following contents:
<?php
phpinfo();
Then go to the http://<ip-address-here>/phpinfo.php URL; you will find the user under Environment.
Then you can change the owner of the disk by running chown -R <php-user>:psacln on the external storage mount point from your question.
Make sure to remove the phpinfo.php file since this may contain some sensitive values.

Automate Azure VM server SQL job backup copy to another server?

I have an Azure VM server. On it, I have a job set up for automatic backup to local Azure storage. I need to store a copy of that backup on another server. How do I do that? Is there any way to do it automatically?
I am not sure if you can do it directly from one server to another server, but you can do it via Blob storage. Use AzCopy (https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy) for uploading and downloading files from blobs.
You can also use Azure File Service to copy the backups for archival purposes. Use the following commands to mount a network drive to archive the backup:
Create a storage account in Windows Azure PowerShell
New-AzureStorageAccount -StorageAccountName "tpch1tbbackup" -Location "West US"
Create a storage context
$storageAccountName = "tpch1tbbackup"
$storageKey = Get-AzureStorageKey $storageAccountName | %{$_.Primary}
$context = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey
Create a share
New-AzureStorageShare -Name backup -Context $context
Attach the share to your Azure VM
net use x: \\tpch1tbbackup.file.core.windows.net\backup /u:$storageAccountName $storageKey
Xcopy the backup to x: to offload it onto the mounted drive
xcopy D:\backup\*.* X:\tpchbackup\*.*
According to your question, you can achieve this in many ways. As #Alberto Morillo and #Vivek said, you can use PowerShell and AzCopy to do that. But if you want to back up and copy automatically, you can use a runbook to achieve that.
Also, you can attach schedules to a runbook. With this, you can back up your resources automatically. A runbook can run PowerShell cmdlets and provides many features to automate your job.
See more details about runbooks in Azure Automation in this document.
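As a rough illustration only (storage account names, keys and container names below are placeholders), a runbook could copy every .bak blob from the backup container into a second storage account with a server-side copy:
# Placeholder account keys -- in a real runbook pull these from Automation assets
$srcKey  = "<source-storage-key>"
$destKey = "<archive-storage-key>"

# Contexts for the source and destination storage accounts
$srcCtx  = New-AzureStorageContext -StorageAccountName "sourceaccount" -StorageAccountKey $srcKey
$destCtx = New-AzureStorageContext -StorageAccountName "archiveaccount" -StorageAccountKey $destKey

# Server-side copy of every .bak blob into the archive container
Get-AzureStorageBlob -Container "backups" -Context $srcCtx |
    Where-Object { $_.Name -like "*.bak" } |
    ForEach-Object {
        Start-AzureStorageBlobCopy -SrcBlob $_.Name -SrcContainer "backups" -Context $srcCtx -DestContainer "archive" -DestBlob $_.Name -DestContext $destCtx
    }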
To automate your backup process from one server to another using the Azure storage service, you have to make three batch files.
The first one takes a backup of your DB and stores it locally. Here is the command to do that:
set FileName=DBName_%date:~-4,4%%date:~-10,2%%date:~-7,2%.bacpac
echo %FileName%
"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe" /a:Export /ssn:your IP(00:00:00) /sdn:yourdatabaseName /tf:"D:\%FileName%" /su:username /sp:"password"
The second one pushes your locally saved backup file to Azure storage:
"C:\AzCopy\AzCopy.exe" /Source:D: /Dest:https://youstoragedestination.link/blobname/ /DestKey:yourAzureStoragekey /Pattern:"*.bacpac" /Y
del "d:*.bacpac"
The third batch file calls the above two batch files.
example:
call "yourpath\backupFile.bat"
call "youpath\backupFilepushing2azure.bat"
You can schedule the third batch file (e.g. with Windows Task Scheduler) to automate the process.
Now you have pushed your backup file to Azure Storage, which I think is enough.
If you really want to save that backup file to another server, then make another batch file that downloads the backup file from the blob to that server using AzCopy (see the sketch below).
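A rough sketch of that download batch file, mirroring the upload command above (the storage URL, key and destination folder are placeholders):
rem Download the .bacpac backups from blob storage to this server
"C:\AzCopy\AzCopy.exe" /Source:https://youstoragedestination.link/blobname/ /Dest:D:\backup /SourceKey:yourAzureStoragekey /Pattern:"*.bacpac" /Y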

Sharing temporary files between users

I am building a web application deployment script in PowerShell, and for one task I am trying to create and restore a database from a SQL Server backup file.
The files end up on the user's desktop, so when I instruct SQL Server to restore from it, it complains with an 'Access is denied.' error when reading the backup.
RESTORE DATABASE [Acme] FROM DISK = 'C:\Users\matthew\Desktop\my-database.bak' WITH REPLACE
Responds with
Msg 3201, Level 16, State 2, Line 2
Cannot open backup device 'C:\Users\matthew\Desktop\my-database.bak'. Operating system error 5(Access is denied.).
Moving the file to a publicly accessible area like C:\Temp works, as indicated in the following answer: Why can't I read from .BAK files on my Desktop using SQL Express in Windows Authentication Mode
However, C:\Temp is not a standard Windows temp directory. Since I am using PowerShell, I am leveraging .NET libraries, such as using GetTempPath. This ends up pointing to
C:\Users\matthew\AppData\Local\Temp
which still has the same permission problem.
Is there a standard way to get a temporary directory that any local user can access?
EDIT: to clarify, the user matthew and the user that is restoring the backup are different.
It's not uncommon to create a folder C:\Temp as a system-wide temp directory. For a backup/restore scenario you just need a folder that's accessible by both the backup and the restore user, be it a custom folder, a built-in public folder like C:\Users\Public\Documents, or adjusted permissions on a user profile.
However, from a security point of view it's probably a good idea to create a dedicated folder (e.g. C:\backup) to which only the required users have access, e.g. like this:
$backupDir = 'C:\backup'
$backupUser = 'DOMAIN\userA'
$restoreUser = 'DOMAIN\userB'
function New-Ace {
[CmdletBinding()]
Param(
[Parameter(Mandatory=$true)]
[string]$User,
[Parameter(Mandatory=$true)]
[string]$Access
)
New-Object Security.AccessControl.FileSystemAccessRule ($User, $Access,
'ObjectInherit, ContainerInherit', 'None', 'Allow')
}
$dir = New-Item -Type Directory $backupDir
$acl = Get-Acl -Path $dir
$acl.SetAccessRuleProtection($true, $false)
$acl.AddAccessRule((New-Ace -User $backupUser -Access 'Modify'))
$acl.AddAccessRule((New-Ace -User $restoreUser -Access 'Read'))
$acl | Set-Acl -Path $dir
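With that in place, the workflow from the question could look roughly like this (the instance name is an assumption; run the copy as the backup user and the restore as the restore user):
# Copy the backup into the shared folder (as the backup user)
Copy-Item 'C:\Users\matthew\Desktop\my-database.bak' 'C:\backup\my-database.bak'

# Restore from the shared folder (as the restore user; adjust the instance name)
Invoke-Sqlcmd -ServerInstance '.\SQLEXPRESS' -Query "RESTORE DATABASE [Acme] FROM DISK = 'C:\backup\my-database.bak' WITH REPLACE"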

Restart remote client machine

All client machines are connected to the server via OpenVPN. Also, all of the client machines have a custom Winlogon shell configured, to run only myapp.exe.
So the desktop or any other Explorer window is not visible unless started from Task Manager using Ctrl+Shift+Esc.
One of the client machines has stopped myapp.exe, and I wanted to restart the machine. So from the server I opened an RDP session using the OpenVPN IP, but unfortunately Ctrl+Shift+Esc is not working to start Task Manager.
Is there any way to restart this client machine from the server machine? No other tool is available on the server to restart this machine; they are connected only via OpenVPN.
Regards
If PowerShell is enabled on the target machine you can remotely reboot it with a PowerShell command.
PS C:\> Restart-Computer <hostname or IP> -whatif
Also you can restart multiple computers in single line of command
PS C:\> Restart-Computer "hostname1", "hostname2" -whatif
If someone is logged in to the target computer you can use the -Force parameter to force the reboot.
The -WhatIf parameter is used to preview what the command would do.
Please have a look at this link: http://technet.microsoft.com/en-us/library/hh849837.aspx
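For the OpenVPN scenario in the question, something like this should work from the server once -WhatIf shows the right target (the IP below is a placeholder for the client's VPN address):
# Force a reboot of the client over the VPN, prompting for admin credentials
Restart-Computer -ComputerName 10.8.0.6 -Credential (Get-Credential) -Force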
The less sophisticated yet time-proven solution is to use the free, handy utility called Wizmo by GRC (the same guy who made ShieldsUP!).
Once downloaded and deployed, you can cause a reboot by calling:
wizmo reboot
Note: not wizmo restart. Because it's so easy, you can create a shortcut, or add it to PATH and call it from the terminal. Too easy!
