Error with storage when using NEW disk type in Terraform SQL Server virtual machine setup

I'm trying to add the SQL Server "data" and "log" disks using Terraform.
There are three data disks created for the VM in Terraform, with LUNs 0, 1 and 2, each set to Standard SSD with 128 GB of space.
The following snippet is what I am currently trying to run:
resource "azurerm_mssql_virtual_machine" "sqlconfig" {
  virtual_machine_id               = azurerm_windows_virtual_machine.vm[var.HOSTNAME_MICROSOFTSQL].id
  sql_license_type                 = "PAYG"
  sql_connectivity_port            = *redacted
  sql_connectivity_type            = *redacted
  sql_connectivity_update_password = *redacted
  sql_connectivity_update_username = *redacted

  sql_instance {
    collation = "SQL_Latin1_General_CP1_CI_AS"
  }

  storage_configuration {
    disk_type             = "NEW"
    storage_workload_type = "OLTP"

    data_settings {
      default_file_path = "F:\\Data"
      luns              = [0]
    }

    log_settings {
      default_file_path = "G:\\Log"
      luns              = [1]
    }

    temp_db_settings {
      default_file_path      = "G:\\tempDb"
      luns                   = [1]
      data_file_count        = 8
      data_file_size_mb      = 8
      data_file_growth_in_mb = 64
      log_file_size_mb       = 8
      log_file_growth_mb     = 64
    }
  }
}
However, I am getting the following error when deploying from an Azure DevOps pipeline:
polling after CreateOrUpdate: Code="Ext_StorageConfigurationSettingsError" Message="Error: 'Number of disks found do not match the expected count for creating Storage Pool, found :0 target: 1. Detail: Disk with LUN number 0 cannot be pooled. Reason : Insufficient Capacity'"
What does this error mean?

The DATA volume seems compatible with Premium SSD and Ultra disks only. Same for LOG. Try changing the disk type.
To validate quickly, try creating your VM via the Azure portal: when configuring storage, you should only be able to select Ultra or PXX disks.
Hope it helps.

It appears that within Terraform, when creating the data disks and attaching them to the VM before running the azurerm_mssql_virtual_machine section, it runs into some kind of race condition between setting up the disk attachments and the storage configuration within azurerm_mssql_virtual_machine. The problem is solved by adding the following block under azurerm_mssql_virtual_machine:
depends_on = [
  azurerm_virtual_machine_data_disk_attachment.[disk_attachment_name]
]
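For reference, here is a minimal sketch of the whole pattern: create the managed disk, attach it, and make the SQL VM resource wait for the attachment. Resource names and the resource group references are hypothetical placeholders, not taken from the question:

```hcl
# Sketch only; names like "sqldata" and the rg references are assumptions.
resource "azurerm_managed_disk" "sqldata" {
  name                 = "sqldata-disk"
  location             = azurerm_resource_group.rg.location
  resource_group_name  = azurerm_resource_group.rg.name
  storage_account_type = "StandardSSD_LRS"
  create_option        = "Empty"
  disk_size_gb         = 128
}

resource "azurerm_virtual_machine_data_disk_attachment" "sqldata" {
  managed_disk_id    = azurerm_managed_disk.sqldata.id
  virtual_machine_id = azurerm_windows_virtual_machine.vm.id
  lun                = 0
  caching            = "ReadOnly"
}

resource "azurerm_mssql_virtual_machine" "sqlconfig" {
  # ... configuration as in the question ...

  # Ensure every disk is attached before the storage pool is configured.
  depends_on = [
    azurerm_virtual_machine_data_disk_attachment.sqldata,
  ]
}
```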


Azure Terraform import bacpac into SQL Server with public network access disabled

I have a SQL Server in Azure with public network access disabled
resource "azurerm_mssql_server" "sql_server" {
  name                          = var.db-server-name
  resource_group_name           = var.resource_group
  location                      = var.location
  version                       = "12.0"
  administrator_login           = local.login
  administrator_login_password  = local.password
  minimum_tls_version           = "1.2"
  public_network_access_enabled = false
  tags                          = var.tags
}
For accessing the server, I am creating a private endpoint:
resource "azurerm_private_endpoint" "sqlserver_private_endpoint" {
  name                = "sqlserver-private-endpoint"
  location            = var.location
  resource_group_name = var.resource_group
  subnet_id           = azurerm_subnet.db_subnet.id

  private_service_connection {
    name                           = "sqlserver-psc"
    is_manual_connection           = false
    private_connection_resource_id = azurerm_mssql_server.sql_server.id
    subresource_names              = ["sqlServer"]
  }

  tags = var.tags
}
I am then trying to create/import a database from Blob Storage:
resource "azurerm_mssql_database" "sql_server_database" {
  name                        = var.db-name
  server_id                   = azurerm_mssql_server.sql_server.id
  collation                   = "SQL_Latin1_General_CP1_CI_AS"
  auto_pause_delay_in_minutes = 60
  max_size_gb                 = 32
  min_capacity                = 0.5
  read_replica_count          = 0
  read_scale                  = false
  sku_name                    = "GP_S_Gen5_1"
  zone_redundant              = false

  import {
    storage_uri                  = var.storage-url
    storage_key                  = var.storage-key
    storage_key_type             = "StorageAccessKey"
    administrator_login          = azurerm_mssql_server.sql_server.administrator_login
    administrator_login_password = azurerm_mssql_server.sql_server.administrator_login_password
    authentication_type          = "Sql"
  }
}
With this setup, I get the following error:
Error: while import bacpac into the new database test-db (Resource Group Test-dev): Code="ImportExportJobError" Message="The ImportExport operation with Request Id '1b005b56-bccd-4484-a5e0-c2495834798a' failed due to 'The SQL instance is inaccessible because the public network interface is denied (Error 47073). Please enable public network access on the SQL Server or configure Import/Export to use Private Link per https://docs.microsoft.com/en-us/azure/azure-sql/database/database-import-export-private-link.'."
with module.sql_server.azurerm_mssql_database.sql_server_database, on Modules\SqlServer\main.tf line 69, in resource "azurerm_mssql_database" "sql_server_database":
69: resource "azurerm_mssql_database" "sql_server_database" {
This error makes sense, as I have set public_network_access_enabled = false on my SQL Server.
For security reasons I would not like to set
public_network_access_enabled = true
so my question is: is there a way to import the database without enabling public network access on the server?
Here I found a way to import the database using PowerShell, which should create a private link for importing, but that way the database would not be created by Terraform and would not be managed through the Terraform state...
So does someone know of a way to import the database using Terraform with public_network_access_enabled = false?
(AzureRM Provider Version: 3.31)
Using Private Link for importing still seems to be in preview. Preview features are rarely (if ever) supported in Terraform, so I would look for alternatives for now.
I would personally create the database with Terraform and then import the required data manually (or with some CI/CD magic). Terraform is generally a bit of a clunky tool for managing what happens inside a database, and I personally prefer other tools for that.
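One hedged sketch of that split: let Terraform create the empty database (no import block) and run the bacpac import out of band from an agent that can reach the private endpoint. The null_resource/SqlPackage step below is illustrative only; the file path and credentials handling are assumptions, not a tested recipe:

```hcl
resource "azurerm_mssql_database" "sql_server_database" {
  name      = var.db-name
  server_id = azurerm_mssql_server.sql_server.id
  # ... same settings as above, but without the import block ...
}

# Hypothetical: run the import from a pipeline agent inside the VNet,
# so the connection goes through the private endpoint.
resource "null_resource" "bacpac_import" {
  depends_on = [azurerm_mssql_database.sql_server_database]

  provisioner "local-exec" {
    command = "sqlpackage /Action:Import /SourceFile:backup.bacpac /TargetServerName:${azurerm_mssql_server.sql_server.fully_qualified_domain_name} /TargetDatabaseName:${var.db-name} /TargetUser:${local.login} /TargetPassword:${local.password}"
  }
}
```

Note that the database then exists in Terraform state while its contents are managed outside it, which matches the "CI/CD magic" suggestion above.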

Ktor with Gradle run configuration "Could not resolve substitution to a value" from environment variables

I have set up a server in Ktor with a Postgres database in Docker, but figured it would be useful to be able to develop the server locally without rebuilding the Docker container each time.
In application.conf I have
// ...
db {
    jdbcUrl = ${DATABASE_URL}
    dbDriver = "org.postgresql.Driver"
    dbDriver = ${?DATABASE_DRIVER}
    dbUser = ${DATABASE_USER}
    dbPassword = ${DATABASE_PASSWORD}
}
and in my DatabaseFactory I have
object DatabaseFactory {
    private val appConfig = HoconApplicationConfig(ConfigFactory.load())
    private val dbUrl = appConfig.property("db.jdbcUrl").getString()
    private val dbDriver = appConfig.property("db.dbDriver").getString()
    private val dbUser = appConfig.property("db.dbUser").getString()
    private val dbPassword = appConfig.property("db.dbPassword").getString()

    fun init() {
        Database.connect(hikari())
        transaction {
            val flyway = Flyway.configure().dataSource(dbUrl, dbUser, dbPassword).load()
            flyway.migrate()
        }
    }

    private fun hikari(): HikariDataSource {
        val config = HikariConfig()
        config.driverClassName = dbDriver
        config.jdbcUrl = dbUrl
        config.username = dbUser
        config.password = dbPassword
        config.maximumPoolSize = 3
        config.isAutoCommit = false
        config.transactionIsolation = "TRANSACTION_REPEATABLE_READ"
        config.validate()
        return HikariDataSource(config)
    }

    suspend fun <T> dbQuery(block: () -> T): T =
        withContext(Dispatchers.IO) {
            transaction { block() }
        }
}
I have edited the Gradle run configuration with the following environment config:
DATABASE_URL=jdbc:h2:mem:default;DATABASE_DRIVER=org.h2.Driver;DATABASE_USER=test;DATABASE_PASSWORD=password
When I run the task I get this error: Could not resolve substitution to a value: ${DATABASE_URL}. But if I set a breakpoint on the first line (private val appConfig) and evaluate System.getenv("DATABASE_URL"), it resolves to the correct value.
My questions are:
Why does this not work?
What is the best (or: a good) setup for developing the server without packing it in a container? Preferably without running the database in another container.
You need to specify:
appConfig.property("ktor.db.jdbcUrl").getString()
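That is, this answer assumes the db block is nested under the ktor root in application.conf (the layout Ktor's generated configs use), so the property path has to mirror the nesting. A sketch of the assumed layout:

```hocon
ktor {
    // deployment, application modules, etc.
    db {
        jdbcUrl = ${DATABASE_URL}
        // ...
    }
}
```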
I found that setting environment variables for the task in build.gradle.kts works:
tasks {
    "run"(JavaExec::class) {
        environment("DATABASE_URL", "jdbc:postgresql://localhost:5432/test")
        environment("DATABASE_USER", "test")
        environment("DATABASE_PASSWORD", "password")
    }
}
(source: Setting environment variables in build.gradle.kts)
As for why my initial approach only works in debug mode, I have no idea.
As for question #2, I suspect H2 and Postgres have some syntactic differences that will cause trouble. Running the database container in the background works fine for now.
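Incidentally, ${DATABASE_URL} is a required HOCON substitution (config loading fails when it cannot be resolved, which is exactly the error above), while ${?DATABASE_DRIVER} is optional and only overrides the preceding default when the variable is set. A minimal Kotlin sketch of that fallback behavior (the helper name is made up for illustration):

```kotlin
// Resolve a setting from an environment variable, falling back to a default.
// This mirrors the HOCON pattern: key = "default" followed by key = ${?VAR}.
fun envOr(name: String, default: String): String =
    System.getenv(name) ?: default

fun main() {
    // With DATABASE_DRIVER unset, this prints the default driver.
    println(envOr("DATABASE_DRIVER", "org.postgresql.Driver"))
}
```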
As answered by @leonardo-freitas, we need to prefix the property path with ktor. when accessing application.conf properties:
environment.config.property("ktor.db.jdbcUrl").getString()
This is missing from the official docs as well: https://ktor.io/docs/jwt.html#validate-payload
I experienced the same problem. I had to restart IntelliJ manually (not via the File menu): simply close the IDE, then start it again. Also check that your environment variable is set permanently.

Deploy OVA to VCenter with Terraform

I am by no means a knowledgeable VMware user at this point. I think this might just be a case where I don't understand some essential concepts.
I'm trying to deploy a VM into vCenter, and I have an OVA (template?) that I want to deploy with.
Currently I have unpacked the OVA, uploaded the VMDKs I found therein to a datastore, and then used this Terraform definition:
provider "vsphere" {
  user                 = "${var.vsphere_user}"
  password             = "${var.vsphere_password}"
  vsphere_server       = "${var.vsphere_server}"
  allow_unverified_ssl = true
}

resource "vsphere_virtual_machine" "primary" {
  name          = "myvm"
  vcpu          = 2
  memory        = 16384
  datacenter    = "${var.vsphere_datacenter}"
  resource_pool = "/DATA_CENTER/host/10.132.260.000"

  network_interface = {
    label              = "Private Network - vmnic0 vmnic2"
    ipv4_address       = "10.132.260.001"
    ipv4_gateway       = "10.132.260.002"
    ipv4_prefix_length = 26
  }

  disk {
    datastore = "datastore1"
    vmdk      = "/path/to/vmdk/"
    bootable  = true
    type      = "thin"
  }
}
This gets stuck because it can't open the VMDK.
When I deploy the OVA with ovftool, the VMDK the VM is deployed with is very different.
An error was received from the ESX host while powering on VM myvm.
Failed to start the virtual machine. Module DiskEarly power on failed.
Cannot open the disk
'/vmfs/volumes/557fc17b-c078f45c-f5bf-002590faf644/template_folder/my_vm.vmdk'
or one of the snapshot disks it depends on. The file specified is not
a virtual disk
Should I be uploading the OVA file to the datastore instead and change my disk block to look like:
disk {
  datastore = "datastore1"
  template  = "/path/to/ova/"
  type      = "thin"
}
Or am I just out of luck here? Also, the Terraform provider for vSphere doesn't correctly receive the error from vCenter and just continues to poll even though the VM failed.
An OVA contains streamOptimized disks. If you upload one directly to the datastore, vSphere doesn't recognize it as a VMDK for a VM.
You can use the vmware-vdiskmanager tool to convert the streamOptimized disk to a sparse disk:
vmware-vdiskmanager -r ova_extracted.vmdk -t 0 destination.vmdk
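Alternatively, newer versions of the Terraform vsphere provider can deploy an OVA directly through the virtual machine resource's ovf_deploy block, which avoids the conversion entirely. A sketch under that assumption; the data source names and paths are hypothetical:

```hcl
# Assumes data sources for datacenter, pool, datastore and host are defined.
resource "vsphere_virtual_machine" "from_ova" {
  name             = "myvm"
  datacenter_id    = data.vsphere_datacenter.dc.id
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.ds.id
  host_system_id   = data.vsphere_host.host.id

  ovf_deploy {
    # The provider handles the streamOptimized disks itself here.
    local_ovf_path    = "/path/to/appliance.ova"
    disk_provisioning = "thin"
  }
}
```

This was not available in the older provider syntax used in the question, so it only helps if upgrading the provider is an option.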

Configure Slick with Sql Server

I have a project that is currently using MySQL that I would like to migrate to SQL Server (running on Azure). I have tried many combinations of configuration but always get the same generic error message:
Cannot connect to database [default]
Here is my latest configuration attempt:
slick.dbs.default.driver = "com.typesafe.slick.driver.ms.SQLServerDriver"
slick.dbs.default.db.driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
slick.dbs.default.db.url = "jdbc:sqlserver://my_host.database.windows.net:1433;database=my_db"
slick.dbs.default.db.user = "username"
slick.dbs.default.db.password = "password"
slick.dbs.default.db.connectionTimeout="10 seconds"
I have the sqljdbc4.jar in my lib/ folder,
and have added the following to my build.sbt:
libraryDependencies += "com.typesafe.slick" %% "slick-extensions" % "3.0.0"
resolvers += "Typesafe Releases" at "http://repo.typesafe.com/typesafe/maven-releases/"
Edit: I can connect from this machine using a GUI app, so the issue is not with any of the network settings.
Edit (5/30/2017):
Since the release of Slick 3.2 the driver is now in the core suite. These are examples of configs with 3.2:
oracle = {
    driver = "slick.jdbc.OracleProfile$"
    db {
        host = ${?ORACLE_HOST}
        port = ${?ORACLE_PORT}
        sid = ${?ORACLE_SID}
        url = "jdbc:oracle:thin:@//"${oracle.db.host}":"${oracle.db.port}"/"${oracle.db.sid}
        user = ${?ORACLE_USERNAME}
        password = ${?ORACLE_PASSWORD}
    }
}

sqlserver = {
    driver = "slick.jdbc.SQLServerProfile$"
    db {
        host = ${?SQLSERVER_HOST}
        port = ${?SQLSERVER_PORT}
        databaseName = ${?SQLSERVER_DB_NAME}
        url = "jdbc:sqlserver://"${sqlserver.db.host}":"${sqlserver.db.port}";databaseName="${sqlserver.db.databaseName}
        user = ${?SQLSERVER_USERNAME}
        password = ${?SQLSERVER_PASSWORD}
    }
}
End Edit
I only have experience with the Oracle config, but I believe it is fairly similar. You are missing the critical $ at the end of the default driver. Also, you will need to make sure your SBT project recognizes the lib.
This first code snippet should be in application.conf, or whatever file you are using for your configuration:
oracle = {
    driver = "com.typesafe.slick.driver.oracle.OracleDriver$"
    db {
        host = ""
        port = ""
        sid = ""
        url = "jdbc:oracle:thin:@//"${oracle.db.host}":"${oracle.db.port}"/"${oracle.db.sid}
        user = ${?USERNAME}
        password = ${?PASSWORD}
        driver = oracle.jdbc.driver.OracleDriver
    }
}
This second section is in my build.sbt. I put my Oracle driver in /.lib in the base folder, although there may be a better way:
unmanagedBase := baseDirectory.value / ".lib"
Finally, to make sure the config is loading properly: Slick's default seems to misbehave, so hopefully you get a right answer rather than a "works for me" answer. Utilizing my config above, I can load it with the following snippet. I found this in an example of a cake-pattern implementation and it has worked very well in multiple projects:
val dbConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig("oracle")
implicit val profile: JdbcProfile = dbConfig.driver
implicit val db: JdbcProfile#Backend#Database = dbConfig.db
This allows you to use the database and the driver for imports, and it will fail at compile time if your configuration is wrong. Hope this helps.
Edit: I finished writing this and realized you were working with Azure, so make sure you can fully connect from the same machine with the same settings using a client of your choice, to confirm that all firewall and user settings are correct and that the problem truly lies in your code and not in your system configuration.
Edit 2: I wanted to make sure I didn't give you bad advice, since mine was an Oracle config, so I set it up against an AWS SQL Server. I used the sqljdbc42.jar that Microsoft provides with their JDBC install, put that in .lib, and then had a configuration like the following. As in the example above you could use environment variables instead, but this was a quick proof of concept. Here is a Microsoft SQL Server config I have now tested and confirmed works:
sqlserver = {
    driver = "com.typesafe.slick.driver.ms.SQLServerDriver$"
    db {
        driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        host = ""
        port = ""
        databaseName = ""
        url = "jdbc:sqlserver://"${sqlserver.db.host}":"${sqlserver.db.port}";databaseName="${sqlserver.db.databaseName}
        user = ""
        password = ""
    }
}

The database file cannot be found. Check the path to the database. Data source in Windows Phone 7

I have created a database in Windows Phone 7 and it works fine,
but after rebuilding the application it says:
The database file cannot be found. Check the path to the database. [ Data Source = \Applications\Data\6157CB94-31D3-4E6F-BFC3-78BE1549C10A\Data\IsolatedStore\amit.sdf ]
My code for the connection string is:
private const string Con_String = @"isostore:/amit.sdf";
How can I solve this? Please give me any suggestions.
Have you checked this sample: How to create a basic local database app for Windows Phone?
They use this path to create the db:
// Specify the connection string as a static, used in main page and app.xaml.
public static string DBConnectionString = "Data Source=isostore:/ToDo.sdf";
and also don't forget to check whether the db exists:
// Create the database if it does not exist.
using (ToDoDataContext db = new ToDoDataContext(ToDoDataContext.DBConnectionString))
{
    if (db.DatabaseExists() == false)
    {
        // Create the database.
        db.CreateDatabase();
    }
}
