Terraform error: vsphere provider doesn’t support resource vsphere_instance

I have a small issue: my Terraform code says the vsphere provider does not support a vsphere_instance resource.
When I run terraform plan, I get:
1 error(s) occurred:
* vsphere_instance.node1: Provider doesn’t support resource: vsphere_instance
Terraform template:
provider "vsphere" {
user = "andm"
password = "Welcome123!"
vsphere_server = "vcenter1.domain.com"
allow_unverified_ssl = true
}
resource "vsphere_instance" "node1" {
name = "node1.domain.com"
vcpu = 4
memory = 4096
time_zone = "040"
domain = "hosting.domain.com"
dns_servers = ["8.8.8.8"]
disk {
datastore = "WS006_LUN_197"
vmdk = "templates_01/AV_W2K8_Tmlate/AV_W2K8_Template.vmdk"
type = "thin"
}
network_interface {
ipv4_address = "192.168.0.1"
ipv4_gateway = "192.168.1.1"
ipv4_prefix_length = "24"
}
}

Change the resource type from vsphere_instance to vsphere_virtual_machine; this should fix your issue. The provider documentation lists the supported resources, and a corrected block is shown after the list below:
https://www.terraform.io/docs/providers/vsphere/index.html
VMWARE VSPHERE PROVIDER
RESOURCES
vsphere_virtual_machine
vsphere_folder
vsphere_file
vsphere_virtual_disk
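For reference, here is the resource block from the question with only the type renamed; every argument is left exactly as posted and assumes the legacy (pre-1.0) vsphere provider syntax the question already uses:

resource "vsphere_virtual_machine" "node1" {
  name        = "node1.domain.com"
  vcpu        = 4
  memory      = 4096
  time_zone   = "040"
  domain      = "hosting.domain.com"
  dns_servers = ["8.8.8.8"]

  disk {
    datastore = "WS006_LUN_197"
    vmdk      = "templates_01/AV_W2K8_Tmlate/AV_W2K8_Template.vmdk"
    type      = "thin"
  }

  network_interface {
    ipv4_address       = "192.168.0.1"
    ipv4_gateway       = "192.168.1.1"
    ipv4_prefix_length = "24"
  }
}

After the rename, the reference in terraform plan changes from vsphere_instance.node1 to vsphere_virtual_machine.node1.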

Related

How to use Terraform to enable a managed private endpoint on a Data Factory Azure SQL Database linked service

I am trying to use Terraform to create ADF linked services. However, the Terraform resource doesn't give the option to select an already existing managed private endpoint for the linked service to communicate over, while this is possible when creating it from the portal. Below is my code:
resource "azurerm_data_factory" "process-adf" {
resource_group_name = module.resourcegroup.resource_group.name
location = module.resourcegroup.resource_group.location
name = "adf"
managed_virtual_network_enabled = true
public_network_enabled = false
tags = var.tags
identity {
type = "SystemAssigned"
}
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "process-mssql-adf" {
name = "mssql-adf"
data_factory_id = azurerm_data_factory.process-adf.id
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.adf.id
connection_string = "data source=servername;initial catalog=databasename;user id=admin;Password=password;integrated security=True;encrypt=True;connection timeout=30"
}
resource "azurerm_data_factory_managed_private_endpoint" "adf-msssql-pe" {
name = "adf"
data_factory_id = azurerm_data_factory.process-adf.id
target_resource_id = azurerm_mssql_server.process-control.id
subresource_name = "sqlServer"
}
resource "azurerm_data_factory_integration_runtime_azure" "adf" {
name = "adf"
data_factory_id = azurerm_data_factory.process-adf.id
location = module.resourcegroup.resource_group.location
virtual_network_enabled = true
}
How do I point the resource azurerm_data_factory_linked_service_azure_sql_database to the resource azurerm_data_factory_managed_private_endpoint?

Apache Zeppelin configured for OIDC redirects to http://localhost:8081/null

I've tried with both Apache Zeppelin 0.8 and 0.9 + pac4j and the problem is the same. When visiting the app root at http://localhost:8081/ I get redirected to http://localhost:8081/null. log4j does not output anything that may help.
This is my shiro.ini file:
[main]
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
securityManager.sessionManager = $sessionManager
securityManager.sessionManager.globalSessionTimeout = 86400000
oidcConfig = org.pac4j.oidc.config.OidcConfiguration
oidcConfig.discoveryURI = http://localhost:8080/auth/realms/Test/.well-known/openid-configuration
oidcConfig.clientId = Zeppelin
oidcConfig.secret = e15b220e-9b3c-4997-9a76-81086e3e1ca3
oidcConfig.clientAuthenticationMethodAsString = client_secret_basic
oidcClient = org.pac4j.oidc.client.OidcClient
oidcClient.configuration = $oidcConfig
clients = org.pac4j.core.client.Clients
clients.callbackUrl = http://localhost:8081/api/callback
clients.clients = $oidcClient
requireRoleAdmin = org.pac4j.core.authorization.authorizer.RequireAnyRoleAuthorizer
config = org.pac4j.core.config.Config
config.clients = $clients
pac4jRealm = io.buji.pac4j.realm.Pac4jRealm
pac4jSubjectFactory = io.buji.pac4j.subject.Pac4jSubjectFactory
securityManager.subjectFactory = $pac4jSubjectFactory
oidcSecurityFilter = io.buji.pac4j.filter.SecurityFilter
oidcSecurityFilter.config = $config
oidcSecurityFilter.clients = oidcClient
callbackFilter = io.buji.pac4j.filter.CallbackFilter
callbackFilter.defaultUrl = http://localhost:8081
callbackFilter.config = $config
[urls]
/api/version = anon
/api/callback = callbackFilter
/** = oidcSecurityFilter
Adding this resolver helps redirect to the right OIDC path:
ajaxRequestResolver = org.pac4j.core.http.ajax.DefaultAjaxRequestResolver
ajaxRequestResolver.addRedirectionUrlAsHeader = true
oidcClient.ajaxRequestResolver = $ajaxRequestResolver
I believe this is a bug in one of the libraries. You could try using specific versions of the underlying dependencies. This is the combination that helped me solve the null problem:
https://repo1.maven.org/maven2/org/apache/commons/commons-collections4/4.4/commons-collections4-4.4.jar
https://repo1.maven.org/maven2/com/nimbusds/lang-tag/1.5/lang-tag-1.5.jar
https://repo1.maven.org/maven2/net/minidev/json-smart/2.4.7/json-smart-2.4.7.jar
https://repo1.maven.org/maven2/com/nimbusds/oauth2-oidc-sdk/9.9/oauth2-oidc-sdk-9.9.jar
https://repo1.maven.org/maven2/com/nimbusds/content-type/2.1/content-type-2.1.jar
https://repo1.maven.org/maven2/javax/mail/mail/1.4.7/mail-1.4.7.jar
https://repo1.maven.org/maven2/io/buji/buji-pac4j/5.0.1/buji-pac4j-5.0.1.jar
https://repo1.maven.org/maven2/org/pac4j/pac4j-core/4.0.3/pac4j-core-4.0.3.jar
https://repo1.maven.org/maven2/org/pac4j/pac4j-oidc/4.0.3/pac4j-oidc-4.0.3.jar

Can't execute a transaction block while working with a database (Kotlin, Exposed)

I am trying to send transactions to a PostgreSQL database (the table exists) using the Exposed framework for Kotlin, but an error occurs that prevents me from doing so. The error appears on the line SchemaUtils.create(tableTest).
Source code:
import org.jetbrains.exposed.dao.id.IntIdTable
import org.jetbrains.exposed.sql.*
import org.jetbrains.exposed.sql.transactions.transaction
fun main(args: Array<String>) {
    val db = Database.connect("jdbc:postgresql://localhost:5432/testBase", driver = "org.postgresql.Driver", user = "user", password = "123")
    println("Database name: ${db.name}")
    transaction {
        addLogger(StdOutSqlLogger)
        SchemaUtils.create(tableTest)
        println("People: ${tableTest.selectAll()}")
    }
}

object tableTest : Table() {
    val id = integer("id")
    val name = text("name")
    val surname = text("surname")
    val height = integer("height")
    val phone = text("phone")
    override val primaryKey = PrimaryKey(id)
}
The error:
Exception in thread "main" java.lang.ExceptionInInitializerError
at MainKt$main$1.invoke(main.kt:12)
at MainKt$main$1.invoke(main.kt)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$inTopLevelTransaction$1.invoke(ThreadLocalTransactionManager.kt:170)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$inTopLevelTransaction$2.invoke(ThreadLocalTransactionManager.kt:211)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:219)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.inTopLevelTransaction(ThreadLocalTransactionManager.kt:210)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$transaction$1.invoke(ThreadLocalTransactionManager.kt:148)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:219)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:120)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:118)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction$default(ThreadLocalTransactionManager.kt:117)
at MainKt.main(main.kt:10)
Caused by: java.lang.IllegalStateException: javaClass.`package` must not be null
at org.jetbrains.exposed.sql.Table.<init>(Table.kt:306)
at org.jetbrains.exposed.sql.Table.<init>(Table.kt:303)
at tableTest.<init>(main.kt:30)
at tableTest.<clinit>(main.kt:30)
... 12 more
build.gradle.kts:
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

plugins {
    kotlin("jvm") version "1.4.0"
    application
}

group = "me.amd"
version = "1.0-SNAPSHOT"

repositories {
    mavenCentral()
    jcenter()
}

dependencies {
    testImplementation(kotlin("test-junit"))
    implementation("org.jetbrains.exposed", "exposed-core", "0.26.2")
    implementation("org.jetbrains.exposed", "exposed-dao", "0.26.2")
    implementation("org.jetbrains.exposed", "exposed-jdbc", "0.26.2")
    implementation("org.postgresql:postgresql:42.2.16")
    implementation("org.slf4j", "slf4j-api", "1.7.25")
    implementation("org.slf4j", "slf4j-simple", "1.7.25")
    implementation("org.xerial:sqlite-jdbc:3.30.1")
}

tasks.withType<KotlinCompile>() {
    kotlinOptions.jvmTarget = "1.8"
}

application {
    mainClassName = "MainKt"
}
I tried doing this:
transaction {
    addLogger(StdOutSqlLogger)
    val schema = Schema("tableTest", authorization = "postgres", password = "123456")
    SchemaUtils.setSchema(schema)
    println("People: ${tableTest.selectAll()}")
}
but the error just moved to the line println("People: ${tableTest.selectAll()}").
I also tried sending queries to SQLite, and the result was the same.
How can I fix this error and still query the database? I hope for your help!
Add a package statement above your import statements: the stack trace shows Exposed's Table constructor failing with javaClass.`package` must not be null, which happens because tableTest is declared in the default package. Furthermore, add your main method in a class. A minimal sketch of the package fix is below.
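A minimal sketch of the first suggestion, assuming the file is moved into a package named me.amd (the package name is hypothetical; anything other than the default package works). Everything else is the question's code, unchanged:

package me.amd

import org.jetbrains.exposed.sql.*
import org.jetbrains.exposed.sql.transactions.transaction

// Because tableTest no longer lives in the default package, javaClass.`package` is
// non-null when Exposed's Table constructor runs, so the ExceptionInInitializerError goes away.
object tableTest : Table() {
    val id = integer("id")
    val name = text("name")
    val surname = text("surname")
    val height = integer("height")
    val phone = text("phone")
    override val primaryKey = PrimaryKey(id)
}

fun main() {
    Database.connect(
        "jdbc:postgresql://localhost:5432/testBase",
        driver = "org.postgresql.Driver",
        user = "user",
        password = "123"
    )
    transaction {
        addLogger(StdOutSqlLogger)
        SchemaUtils.create(tableTest)
        println("People: ${tableTest.selectAll()}")
    }
}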

Create database schema with terraform

I created an RDS instance using aws_db_instance (main.tf):
resource "aws_db_instance" "default" {
identifier = "${module.config.database["db_inst_name"]}"
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
name = "${module.config.database["db_name_prefix"]}${terraform.workspace}"
username = "${module.config.database["db_username"]}"
password = "${module.config.database["db_password"]}"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
}
Can I also create database schemas from the file schema.sql with terraform apply?
$ tree -L 1
.
├── main.tf
└── schema.sql
You can use a provisioner (https://www.terraform.io/docs/provisioners/index.html) for that:
resource "aws_db_instance" "default" {
identifier = module.config.database["db_inst_name"]
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
name = "${module.config.database["db_name_prefix"]}${terraform.workspace}"
username = module.config.database["db_username"]
password = module.config.database["db_password"]
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
provisioner "local-exec" {
command = "mysql --host=${self.address} --port=${self.port} --user=${self.username} --password=${self.password} < ./schema.sql"
}
}
# Apply the schema by using a bastion host
resource "aws_db_instance" "default_bastion" {
identifier = module.config.database["db_inst_name"]
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
name = "${module.config.database["db_name_prefix"]}${terraform.workspace}"
username = module.config.database["db_username"]
password = module.config.database["db_password"]
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
provisioner "file" {
connection {
user = "ec2-user"
host = "bastion.example.com"
private_key = file("~/.ssh/ec2_cert.pem")
}
source = "./schema.sql"
destination = "~"
}
provisioner "remote-exec" {
connection {
user = "ec2-user"
host = "bastion.example.com"
private_key = file("~/.ssh/ec2_cert.pem")
}
command = "mysql --host=${self.address} --port=${self.port} --user=${self.username} --password=${self.password} < ~/schema.sql"
}
}
The mysql client needs to be installed on the machine running Terraform.
If you don't have direct access to your DB, there is also the remote-exec provisioner, where you can use a bastion host (transfer the file to the remote host with the file provisioner first, as in the second example above).
If your schema is not too complex, you could also use Terraform's MySQL provider; a sketch follows the link:
https://www.terraform.io/docs/providers/mysql/index.html
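A minimal sketch of that approach, assuming the aws_db_instance.default resource from above; the database name app_schema and the resource label app are hypothetical, and note that this provider manages databases, users, and grants rather than running an arbitrary schema.sql:

provider "mysql" {
  endpoint = "${aws_db_instance.default.address}:${aws_db_instance.default.port}"
  username = aws_db_instance.default.username
  password = module.config.database["db_password"]
}

# Hypothetical database; add mysql_user / mysql_grant resources alongside it as needed.
resource "mysql_database" "app" {
  name = "app_schema"
}

Unlike the provisioner approach, these objects are tracked in Terraform state, so later changes to them are planned and applied like any other resource.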

How can I establish RPC properties with the datasource type DB in Corda Community Edition?

To establish an RPC connection in the Community Edition, we need to specify the RPC username, password, and permissions. But when we integrate an external database like MySQL and change the datasource type from INMEMORY to "DB", it does not allow us to set the user properties.
These are the settings I am using in my node.conf:
security = {
    authService = {
        dataSource = {
            type = "DB"
            passwordEncryption = "SHIRO_1_CRYPT"
            connection = {
                jdbcUrl = "jdbc:mysql://localhost:3306"
                username = "root"
                password = "password"
                driverClassName = "com.mysql.jdbc.Driver"
            }
        }
        options = {
            cache = {
                expireAfterSecs = 120
                maxEntries = 10000
            }
        }
    }
}
Maybe I didn't understand your question, but the database setup in node.conf is separate from the RPC user setup:
Database (Postgres in my case):
extraConfig = [
    'dataSourceProperties.dataSourceClassName' : 'org.postgresql.ds.PGSimpleDataSource',
    'dataSourceProperties.dataSource.url' : 'jdbc:postgresql://localhost:5432/postgres',
    'dataSourceProperties.dataSource.user' : 'db_user',
    'dataSourceProperties.dataSource.password' : 'db_user_password',
    'database.transactionIsolationLevel' : 'READ_COMMITTED',
    'database.initialiseSchema' : 'true'
]
RPC user:
rpcUsers = [[ user: "rpc_user", "password": "rpc_user_password", "permissions": ["ALL"]]]
OK, I'm adding my node's node.conf (it's part of Corda TestNet, and it's deployed on Google Cloud):
baseDirectory = "."
compatibilityZoneURL = "https://netmap.testnet.r3.com"
emailAddress = "xxx"
jarDirs = [ "plugins", "cordapps" ]
sshd { port = 2222 }
myLegalName = "OU=xxx, O=TESTNET_xxx, L=London, C=GB"
keyStorePassword = "xxx"
trustStorePassword = "xxx"
crlCheckSoftFail = true
database = {
    transactionIsolationLevel = "READ_COMMITTED"
    initialiseSchema = "true"
}
dataSourceProperties {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://xxx:xxx/postgres"
    dataSource.user = xxx
    dataSource.password = xxx
}
p2pAddress = "xxx:xxx"
rpcSettings {
    useSsl = false
    standAloneBroker = false
    address = "0.0.0.0:xxx"
    adminAddress = "0.0.0.0:xxx"
}
rpcUsers = [
    { username=cordazoneservice, password=xxx, permissions=[ ALL ] }
]
devMode = false
cordappSignerKeyFingerprintBlacklist = []
useTestClock = false
