This is regarding Terraform on Azure. In my previous project I used the legacy "azurerm_virtual_machine" resource + an ARM template to provision the "Microsoft.SqlVirtualMachine/SqlVirtualMachines" resource and had the data disks and LUNs configured.
That works pretty well.
In my current project, we are making use of the newer resource "azurerm_windows_virtual_machine" + "azurerm_mssql_virtual_machine" together to spin up the SQL VMs. However, it's been a dud so far.
The Terraform docs example uses the legacy resource "azurerm_virtual_machine".
Problems
I didn't find a way to describe the data disks and LUN IDs in "azurerm_windows_virtual_machine". As a result:
When I don't include the storage_configuration block in "azurerm_mssql_virtual_machine", the Azure portal shows "Drive is not found in the volumes list." under the SQL Virtual Machine resource (not the Virtual Machine resource) > Configuration section. I have attached a screenshot.
If I try to specify the data disks and LUNs in the storage_configuration block of "azurerm_mssql_virtual_machine", the provisioning fails with the error below:
creating Sql Virtual Machine (Sql Virtual Machine Name "ASQLVM" / Resource Group "a-resource-group"):
sqlvirtualmachine.SQLVirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 --
Original Error: Code="InvalidDefaultFilePath" Message="Invalid Default File Path"
Is there a good way to provision SQL Virtual Machines using the new "azurerm_windows_virtual_machine" + "azurerm_mssql_virtual_machine" together?
Check out the following code. I hope it answers your questions if you haven't already worked it out.
resource "azurerm_windows_virtual_machine" "vm" {
count = length(var.instances)
name = upper(element(var.instances, count.index))
location = azurerm_resource_group.resourcegroup[count.index].location
resource_group_name = azurerm_resource_group.resourcegroup[count.index].name
network_interface_ids = [azurerm_network_interface.nic[count.index].id]
size = var.instancesize
zone = var.instancezone
admin_username = var.vmadmin
admin_password = data.azurerm_key_vault_secret.vmadminpwd.value
enable_automatic_updates = "false"
patch_mode = "Manual"
provision_vm_agent = "true"
tags = var.tags
source_image_reference {
publisher = "MicrosoftSQLServer"
offer = "sql2019-ws2019"
sku = "enterprise"
version = "latest"
}
os_disk {
name = "${element(var.instances, count.index)}-osdisk"
caching = "ReadWrite"
storage_account_type = "StandardSSD_LRS"
disk_size_gb = 250
}
}
# add a data disk - we were going to iterate through a collection, but this is easier for now
resource "azurerm_managed_disk" "datadisk" {
count = length(var.instances)
name = "${azurerm_windows_virtual_machine.vm[count.index].name}-data-disk01"
location = azurerm_resource_group.resourcegroup[count.index].location
resource_group_name = azurerm_resource_group.resourcegroup[count.index].name
storage_account_type = "Premium_LRS"
zones = [var.instancezone]
create_option = "Empty"
disk_size_gb = 1000
tags = var.tags
}
resource "azurerm_virtual_machine_data_disk_attachment" "datadisk_attach" {
count = length(var.instances)
managed_disk_id = azurerm_managed_disk.datadisk[count.index].id
virtual_machine_id = azurerm_windows_virtual_machine.vm[count.index].id
lun = 1
caching = "ReadWrite"
}
# add a log disk - we were going to iterate through a collection, but this is easier for now
resource "azurerm_managed_disk" "logdisk" {
count = length(var.instances)
name = "${azurerm_windows_virtual_machine.vm[count.index].name}-log-disk01"
location = azurerm_resource_group.resourcegroup[count.index].location
resource_group_name = azurerm_resource_group.resourcegroup[count.index].name
storage_account_type = "Premium_LRS"
zones = [var.instancezone]
create_option = "Empty"
disk_size_gb = 500
tags = var.tags
}
resource "azurerm_virtual_machine_data_disk_attachment" "logdisk_attach" {
count = length(var.instances)
managed_disk_id = azurerm_managed_disk.logdisk[count.index].id
virtual_machine_id = azurerm_windows_virtual_machine.vm[count.index].id
lun = 2
caching = "ReadWrite"
}
# configure the SQL side of the deployment
resource "azurerm_mssql_virtual_machine" "sqlvm" {
count = length(var.instances)
virtual_machine_id = azurerm_windows_virtual_machine.vm[count.index].id
sql_license_type = "PAYG"
r_services_enabled = true
sql_connectivity_port = 1433
sql_connectivity_type = "PRIVATE"
sql_connectivity_update_username = var.sqladmin
sql_connectivity_update_password = data.azurerm_key_vault_secret.sqladminpwd.value
#The storage_configuration block supports the following:
storage_configuration {
disk_type = "NEW" # (Required) The type of disk configuration to apply to the SQL Server. Valid values include NEW, EXTEND, or ADD.
storage_workload_type = "OLTP" # (Required) The type of storage workload. Valid values include GENERAL, OLTP, or DW.
# The data_settings and log_settings blocks support the following:
data_settings {
default_file_path = var.sqldatafilepath # (Required) The SQL Server default path
luns = [azurerm_virtual_machine_data_disk_attachment.datadisk_attach[count.index].lun]
}
log_settings {
default_file_path = var.sqllogfilepath # (Required) The SQL Server default path
luns = [azurerm_virtual_machine_data_disk_attachment.logdisk_attach[count.index].lun] # (Required) A list of Logical Unit Numbers for the disks.
}
# temp_db_settings {
# default_file_path = var.sqltempdbfilepath #- (Required) The SQL Server default path
# luns = [3] #- (Required) A list of Logical Unit Numbers for the disks.
# }
}
}
Contrary to the answer above, the proposed code does not solve the problem.
The problem is:
There's no clearly documented format for passing the default file path to the Terraform SQL resource.
resource "azurerm_mssql_virtual_machine" "sqlserver" {
virtual_machine_id = azurerm_windows_virtual_machine.win.id
sql_license_type = "PAYG"
r_services_enabled = true
auto_patching {
day_of_week = "Sunday"
maintenance_window_duration_in_minutes = 60
maintenance_window_starting_hour = 2
}
storage_configuration {
disk_type = "NEW"
storage_workload_type = "OLTP"
data_settings {
default_file_path = "D:\\Data"
luns = [0]
}
log_settings {
default_file_path = "E:\\log"
luns = [1]
}
temp_db_settings {
default_file_path = "F:\\bin"
luns = [2]
}
  }
}
As you can see, I'm trying to define the correct path settings, but it's not working.
The goal is to attach and format the disks so that the SQL resource can take control of the path/disk.
You need to use the temporary drive letter D: (for tempdb) and skip E: because it's the DVD drive.
You need to do it like this:
storage_configuration {
disk_type = "NEW"
storage_workload_type = "OLTP"
data_settings {
default_file_path = "F:\\Data"
luns = [0]
}
log_settings {
default_file_path = "G:\\Log"
luns = [1]
}
temp_db_settings {
default_file_path = "D:\\TempDb"
luns = []
  }
}
Issue:
Someone has added a junk column to one of my tables. I want to figure out from the logs when and from where this change was made.
Please help with this issue.
Make sure logging is enabled in postgresql.conf:
1. log_destination = 'stderr' # log_destination = 'stderr,csvlog,syslog'
2. logging_collector = on # needs a restart
3. log_directory = 'pg_log'
4. log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
5. log_rotation_age = 1d
6. log_rotation_size = 10MB
7. log_min_error_statement = error
8. log_min_duration_statement = 5000 # -1 = disable ; 0 = ALL ; 5000 = 5sec
9. log_line_prefix = '|%m|%r|%d|%u|%e|'
10. log_statement = 'ddl' # 'none' | 'ddl' | 'mod' | 'all'
# prefer 'ddl' because the log output will then contain the DDL plus any query over the minimum duration
If you haven't enabled it, make sure to enable it now.
If you don't have the logs, the last resort is to run pg_xlogdump on your xlog files under pg_xlog and look for the DDL.
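If you do have the logs, here is a minimal sketch (my own illustration, not part of the configuration advice above) of scanning the collected log files for the offending DDL with Python; the pg_log path is an assumption, so point it at the pg_log directory under your data directory. With the log_line_prefix above, every matching line already carries the timestamp (%m) and the client address (%r), which covers the "when and from where" part.
import glob
import re

# Assumed location of the collected logs; adjust to <data_directory>/pg_log on your server.
LOG_GLOB = "/var/lib/postgresql/data/pg_log/postgresql-*.log"

for path in sorted(glob.glob(LOG_GLOB)):
    with open(path) as log:
        for line in log:
            # Any ALTER TABLE (e.g. the one that added the junk column) is logged with log_statement = 'ddl'.
            if re.search(r"ALTER\s+TABLE", line, re.IGNORECASE):
                print(path, line.rstrip())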
I get "IndexError: list is out of range" when I input this code. Also, the retmax is set at 614 because that's the total number of results when I make the request. Is there a way to make the retmode equal to the number of results using a variable that changes depending on the search results?
#!/usr/bin/env python
from Bio import Entrez
Entrez.email = "something#gmail.com"
handle1 = Entrez.esearch(db = "nucleotide", term = "dengue full genome", retmax = 614)
record = Entrez.read(handle1)
IdNums = [int(i) for i in record['IdList']]
while i >= 0 and i <= len(IdNums):
    handle2 = Entrez.esearch(db = "nucleotide", id = IdNums[i], type = "gb", retmode = "text")
    record = Entrez.read(handle2)
    print(record)
    i += 1
Rather than using a while loop, you can use a for loop...
from Bio import Entrez
Entrez.email = 'youremailaddress'
handle1 = Entrez.esearch(db = 'nucleotide', term = 'dengue full genome', retmax = 614)
record = Entrez.read(handle1)
IdNums = [int(i) for i in record['IdList']]
for i in IdNums:
    print(i)
    handle2 = Entrez.esearch(db = 'nucleotide', term = 'dengue full genome', id = i, rettype = 'gb', retmode = 'text')
    record = Entrez.read(handle2)
    print(record)
I ran it on my computer and it seems to work. The for loop solved the out of bounds, and adding the term to handle2 solved the calling error.
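If you also want retmax to track the number of results instead of hard-coding 614, one way (a sketch of my own, not something the code above does) is to ask esearch for the count first and then repeat the search with that count; Entrez.efetch is the usual call for then pulling the GenBank-formatted records themselves. The email address and the batch size of 100 are placeholders.
from Bio import Entrez

Entrez.email = 'you@example.com'  # placeholder; use your own address

# First search only to learn how many records match the term.
handle = Entrez.esearch(db = 'nucleotide', term = 'dengue full genome', retmax = 0)
count = int(Entrez.read(handle)['Count'])

# Repeat the search with retmax set to the actual number of results.
handle = Entrez.esearch(db = 'nucleotide', term = 'dengue full genome', retmax = count)
id_list = Entrez.read(handle)['IdList']

# Fetch the GenBank records in batches of 100 IDs per request.
for start in range(0, len(id_list), 100):
    batch = id_list[start:start + 100]
    fetch = Entrez.efetch(db = 'nucleotide', id = ','.join(batch), rettype = 'gb', retmode = 'text')
    print(fetch.read())
Batching keeps each efetch request reasonably small when the ID list is long.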
I am trying to update customer info. I use this code to load 5 fields:
cust_cn.Open()
Dim cust_da As New SqlDataAdapter("SELECT * FROM [customers] where [custID]=" & txtCustPhone.Text, cust_cn)
cust_da.Fill(cust_datatable)
txtCustPhone.Text = cust_datatable.Rows(0).Item("custID")
txtCustFirstName.Text = cust_datatable.Rows(0).Item("first")
txtCustLastName.Text = cust_datatable.Rows(0).Item("last")
txtCustAddress.Text = cust_datatable.Rows(0).Item("address")
txtCustZip.Text = cust_datatable.Rows(0).Item("zip")
and this works fine. When I try to modify one of the fields (change zip code on an existing customer)
with this code:
If cust_datatable.Rows.Count <> 0 Then
cust_datatable.Rows(0).Item("custID") = txtCustPhone.Text
cust_datatable.Rows(0).Item("first") = txtCustFirstName.Text
cust_datatable.Rows(0).Item("last") = txtCustLastName
cust_datatable.Rows(0).Item("address") = txtCustAddress.Text
cust_datatable.Rows(0).Item("zip") = txtCustZip.Text
'cust_datatable.Rows(custrecord)("custID") = txtCustPhone.Text
'cust_datatable.Rows(custrecord)("first") = txtCustFirstName.Text
'cust_datatable.Rows(custrecord)("last") = txtCustLastName.Text
'cust_datatable.Rows(custrecord)("address") = txtCustAddress.Text
'cust_datatable.Rows(custrecord)("zip") = txtCustZip.Text
cust_DA.Update(cust_datatable)
End If
I get the error: "String or binary data would be truncated"
I originally tried to update using the commented section, but it was only modifying the first record in the database.
Any thoughts?
I'm building a web app with Django. I use PostgreSQL for the DB. The app code is getting really messy (my beginner skills being a big factor) and slow, even when I run the app locally.
This is an excerpt of my models.py file:
REPEATS_CHOICES = (
    (NEVER, 'Never'),
    (DAILY, 'Daily'),
    (WEEKLY, 'Weekly'),
    (MONTHLY, 'Monthly'),
    ...some more...
)
class Transaction(models.Model):
    name = models.CharField(max_length=30)
    type = models.IntegerField(max_length=1, choices=TYPE_CHOICES) # 0 = 'Income' , 1 = 'Expense'
    amount = models.DecimalField(max_digits=12, decimal_places=2)
    date = models.DateField(default=date.today)
    frequency = models.IntegerField(max_length=2, choices=REPEATS_CHOICES)
    ends = models.DateField(blank=True, null=True)
    active = models.BooleanField(default=True)
    category = models.ForeignKey(Category, related_name='transactions', blank=True, null=True)
    account = models.ForeignKey(Account, related_name='transactions')
The problem is with date, frequency and ends. With this info I can know all the dates on which transactions occur and use them to fill a cashflow table. Doing things this way involves creating a lot of structures (dictionaries, lists and tuples) and iterating over them a lot. Maybe there is a very simple way of solving this with the current schema, but I couldn't figure out how.
I think that the app would be easier to code if, at the creation of a transaction, I could save all the dates in the db. I don't know if it's possible or if it's a good idea.
I'm reading a book about Google App Engine and the Datastore's multivalued properties. What do you think about using this to solve my problem?
Edit: I didn't know about the PickleField. I'm now reading about it, maybe I could use it to store all the transaction's datetime objects.
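Something like this rough sketch is what I have in mind, assuming the third-party django-picklefield package (its field class is PickledObjectField) and keeping only the fields that matter here; the occurrences field name is just illustrative:
from datetime import date

from dateutil import relativedelta
from django.db import models
from picklefield.fields import PickledObjectField  # pip install django-picklefield

class Transaction(models.Model):
    # Cut-down version of the model above, only the fields the sketch needs.
    amount = models.DecimalField(max_digits=12, decimal_places=2)
    date = models.DateField(default=date.today)
    ends = models.DateField(blank=True, null=True)
    # Precomputed occurrence dates, stored as a pickled Python list.
    occurrences = PickledObjectField(default=list)

    def save(self, *args, **kwargs):
        # Recompute the monthly occurrences every time the transaction is saved.
        dates = [self.date]
        if self.ends:
            current = self.date + relativedelta.relativedelta(months=+1)
            while current <= self.ends:
                dates.append(current)
                current = current + relativedelta.relativedelta(months=+1)
        self.occurrences = dates
        super(Transaction, self).save(*args, **kwargs)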
Edit2: This is an excerpt of my cashflow2 view (sorry for the horrible code):
def cashflow2(request, account_name="Initial"):
    if account_name == "Initial":
        uri = "/cashflow/new_account"
        return HttpResponseRedirect(uri)
    month_info = {}
    cat_info = {}
    m_y_list = [] # [(month,year),]
    trans = []
    min, max = [], []
    account = Account.objects.get(name=account_name, user=request.user)
    categories = account.categories.all()
    for year in range(2006,2017):
        for month in range(1,13):
            month_info[(month, year)] = [0, 0, 0]
            for cat in categories:
                cat_info[(cat, month, year)] = 0
    previous_months = 1 # months back from the current one
    next_months = 5
    dates_list = month_year_list(previous_months, next_months) # Returns [(month,year)] from the requested range
    m_y_list = [(date.month, date.year) for date in month_year_list(1,5)]
    min, max = dates_list[0], dates_list[-1]
    INCOME = 0
    EXPENSE = 1
    ONHAND = 2
    transacs_in_dates = []
    txs = account.transactions.order_by('date')
    for tx in txs:
        monthyear = ()
        monthyear = (tx.date.month, tx.date.year)
        if tx.frequency == 0:
            if tx.type == 0:
                month_info[monthyear][INCOME] += tx.amount
                if tx.category:
                    cat_info[(tx.category, monthyear[0], monthyear[1])] += tx.amount
            else:
                month_info[monthyear][EXPENSE] += tx.amount
                if tx.category:
                    cat_info[(tx.category, monthyear[0], monthyear[1])] += tx.amount
            if monthyear in m_y_list:
                if tx not in transacs_in_dates:
                    transacs_in_dates.append(tx)
        elif tx.frequency == 4: # frequency = 'Monthly'
            months_dif = relativedelta.relativedelta(tx.ends, tx.date).months
            if tx.ends.day < tx.date.day:
                months_dif += 1
            years_dif = relativedelta.relativedelta(tx.ends, tx.date).years
            dif = months_dif + (years_dif*12)
            dates_range = dif + 1
            for i in range(dates_range):
                dt = tx.date+relativedelta.relativedelta(months=+i)
                if (dt.month, dt.year) in m_y_list:
                    if tx not in transacs_in_dates:
                        transacs_in_dates.append(tx)
                    if tx.type == 0:
                        month_info[(dt.month, dt.year)][INCOME] += tx.amount
                        if tx.category:
                            cat_info[(tx.category, dt.month, dt.year)] += tx.amount
                    else:
                        month_info[(dt.month, dt.year)][EXPENSE] += tx.amount
                        if tx.category:
                            cat_info[(tx.category, dt.month, dt.year)] += tx.amount
    import operator
    thelist = []
    thelist = sorted((my + tuple(v) for my, v in month_info.iteritems()),
                     key = operator.itemgetter(1, 0))
    thelistlist = []
    for atuple in thelist:
        thelistlist.append(list(atuple))
    for i in range(len(thelistlist)):
        if i != 0:
            thelistlist[i][4] = thelistlist[i-1][2] - thelistlist[i-1][3] + thelistlist[i-1][4]
    list = []
    for el in thelistlist:
        if (el[0],el[1]) in m_y_list:
            list.append(el)
    transactions = account.transactions.all()
    cats_in_dates_income = []
    cats_in_dates_expense = []
    for t in transacs_in_dates:
        if t.category and t.type == 0:
            if t.category not in cats_in_dates_income:
                cats_in_dates_income.append(t.category)
        elif t.category and t.type == 1:
            if t.category not in cats_in_dates_expense:
                cats_in_dates_expense.append(t.category)
    cat_infos = []
    for k, v in cat_info.items():
        cat_infos.append((k[0], k[1], k[2], v))
That depends on how relevant App Engine is here. P.S. If you'd like to store pickled objects as well as JSON objects in the Google Datastore, check out these two code snippets:
http://kovshenin.com/archives/app-engine-json-objects-google-datastore/
http://kovshenin.com/archives/app-engine-python-objects-in-the-google-datastore/
Also note that the Google Datastore is a non-relational database, so you might have other trouble refactoring your code to switch to that.
Cheers and good luck!