Snowflake Scala Connector (Snowpark)

I am new to Scala and trying to create a Snowflake connector in Scala using the Snowflake Snowpark Scala library.
Here is my simple code:
package com.abc.commons.rest.snowflake

import com.snowflake.snowpark._
import com.snowflake.snowpark.functions._

object ScalaConnector {
  def main(args: Array[String]): Unit = {
    // Replace the <placeholders> below.
    val configs = Map(
      "URL" -> "https://xxx.snowflakecomputing.com:443",
      "USER" -> "xxx",
      "PASSWORD" -> "xxx",
      "ROLE" -> "xxx",
      "WAREHOUSE" -> "xxx",
      "DB" -> "xxx",
      "SCHEMA" -> "xxx"
    )

    val session = Session.builder.configs(configs).create
    session.sql("show tables").show()
    session.close()
  }
}
I have provided correct credentials, but when I run the above code in IntelliJ I get the errors below:
Exception in thread "main" java.lang.ExceptionInInitializerError
at com.abc.commons.rest.snowflake.ScalaConnector$.main(ScalaConnector.scala:17)
at com.abc.commons.rest.snowflake.ScalaConnector.main(ScalaConnector.scala)
Caused by: java.lang.NullPointerException
at com.snowflake.snowpark.Session$.getActiveSession(Session.scala:1204)
at com.snowflake.snowpark.SnowparkClientException.<init>(SnowparkClientException.scala:15)
at com.snowflake.snowpark.internal.ErrorMessage$.createException(ErrorMessage.scala:380)
at com.snowflake.snowpark.internal.ErrorMessage$.MISC_SCALA_VERSION_NOT_SUPPORTED(ErrorMessage.scala:340)
at com.snowflake.snowpark.internal.Utils$.checkScalaVersionCompatibility(Utils.scala:243)
at com.snowflake.snowpark.internal.Utils$.checkScalaVersionCompatibility(Utils.scala:233)
at com.snowflake.snowpark.Session$.<init>(Session.scala:1129)
at com.snowflake.snowpark.Session$.<clinit>(Session.scala)
... 2 more
The libraryDependencies entry I use in build.sbt:
"com.snowflake" % "snowpark" % "1.4.0"
Can anyone point out the issue in my code?

There's no issue with the code; you're just using an unsupported Scala version. From your stack trace:
MISC_SCALA_VERSION_NOT_SUPPORTED
Snowpark only supports Scala 2.12, as explained in the Snowflake documentation.
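If it helps, here is a minimal build.sbt sketch of the fix, assuming an sbt project (the exact 2.12 patch release is just an example):
// build.sbt -- Snowpark requires Scala 2.12, so pin the project to it
scalaVersion := "2.12.15"  // any 2.12.x release should do

libraryDependencies += "com.snowflake" % "snowpark" % "1.4.0"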

Related

Spark - Reading from SQL Server using com.microsoft.azure

I am trying to read from a table using com.microsoft.azure. Below is the code snippet
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.SparkSession
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._
import com.microsoft.azure.sqldb.spark.query._
import org.apache.spark.sql.functions.to_date

val spark = SparkSession.builder().master("local[*]").appName("DbApp").getOrCreate()

Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")

val config = Config(Map(
  "url" -> "jdbc:sqlserver://localhost:1433",
  "databaseName" -> "Student",
  "dbTable" -> "dbo.MemberDetail",
  "authentication" -> "SqlPassword",
  "user" -> "test",
  "password" -> "****"
))

val df = spark.sqlContext.read.sqlDB(config)
println("Total rows: " + df.count)
However, I am getting the error below:
Exception in thread "main" java.lang.NoClassDefFoundError: scala/Product$class
at com.microsoft.azure.sqldb.spark.config.SqlDBConfigBuilder.<init>(SqlDBConfigBuilder.scala:31)
at com.microsoft.azure.sqldb.spark.config.Config$.apply(Config.scala:254)
at com.microsoft.azure.sqldb.spark.config.Config$.apply(Config.scala:235)
at DbApp$.main(DbApp.scala:55)
at DbApp.main(DbApp.scala)
MSSQL JDBC Version: mssql-jdbc-7.2.2.jre8
azure-sqldb-spark version: 1.0.2
Could anyone kindly guide me on what I am doing wrong?
The driver class doesn't seem to be set in your config nor specified anywhere else. Class.forName just validates that the JDBC driver is present on the classpath. The driver also belongs to com.microsoft.sqlserver, which is a different library.
Consider using this:
import com.microsoft.sqlserver.jdbc.SQLServerDriver
import java.util.Properties
val jdbcHostname = "localhost"
val jdbcPort = 1433
val jdbcDatabase = "Student"
val jdbcTable = "dbo.MemberDetail"
val MyDBUrl = s"jdbc:sqlserver://${jdbcHostname}:${jdbcPort};database=${jdbcDatabase}"
val MyDBProperties = new Properties()
MyDBProperties.put("user", "test")
MyDBProperties.put("password", "****")
MyDBProperties.setProperty("Driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
val df = spark.read.jdbc(MyDBUrl, jdbcTable, MyDBProperties)
This approach was most stable in my environment (using Databricks and Azure SQL DB).
A related knowledge base article is available here.
Since you are using azure-sqldb-spark to connect to SQL Server:
All connection properties of the Microsoft JDBC Driver for SQL Server are supported in this connector. Add connection properties as fields in the com.microsoft.azure.sqldb.spark.config.Config object.
You don't need to load the JDBC driver with Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver") again.
Your code should look like this:
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

val config = Config(Map(
  "url"            -> "localhost",
  "databaseName"   -> "MyDatabase",
  "dbTable"        -> "dbo.Clients",
  "user"           -> "username",
  "password"       -> "*********",
  "connectTimeout" -> "5", // seconds
  "queryTimeout"   -> "5"  // seconds
))

val collection = sqlContext.read.sqlDB(config)
collection.show()
Please refer to:
Connect Spark to SQL DB using the connector
azure-sqldb-spark
Hope this helps.
This issue is due to a version conflict (the versions are mentioned in the question itself) between the com.microsoft.azure.sqldb connector and the com.microsoft.sqlserver JDBC driver; after downloading com.microsoft.azure.sqldb with all its dependencies from the link below, it worked.
Note: com.microsoft.azure.sqldb works on Java 8, so I downgraded my Java runtime version.
Click here to download com.microsoft.azure.sqldb with all dependencies
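If you manage these with sbt rather than downloading the jars by hand, a rough build.sbt sketch of keeping the two in step could look like this (the connector coordinates are the published Maven ones; the mssql-jdbc version below is only an assumption, so check the connector's own POM for the exact driver it was built against):
libraryDependencies ++= Seq(
  "com.microsoft.azure"     % "azure-sqldb-spark" % "1.0.2",
  // assumed driver version -- replace with the one referenced by the connector's POM
  "com.microsoft.sqlserver" % "mssql-jdbc"        % "6.2.2.jre8"
)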

Stuck at: Could not find a suitable table factory

While playing around with Flink, I have been trying to upsert data into Elasticsearch. I'm getting this error on my STDOUT:
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSinkFactory' in
the classpath.
Reason: Required context properties mismatch.
The following properties are requested:
connector.hosts=http://elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local:9200
connector.index=transfers-sum
connector.key-null-literal=n/a
connector.property-version=1
connector.type=elasticsearch
connector.version=6
format.json-schema={ \"curr_careUnit\": {\"type\": \"text\"}, \"sum\": {\"type\": \"float\"} }
format.property-version=1
format.type=json
schema.0.data-type=VARCHAR(2147483647)
schema.0.name=curr_careUnit
schema.1.data-type=FLOAT
schema.1.name=sum
update-mode=upsert
The following factories have been considered:
org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
...
Here is what I have in my Scala Flink code:
def main(args: Array[String]) {
  // Create streaming execution environment
  val env = StreamExecutionEnvironment.getExecutionEnvironment
  env.setParallelism(1)

  // Set properties per KafkaConsumer API
  val properties = new Properties()
  properties.setProperty("bootstrap.servers", "kafka.kafka:9092")
  properties.setProperty("group.id", "test")

  // Add Kafka source to environment
  val myKConsumer = new FlinkKafkaConsumer010[String]("raw.data4", new SimpleStringSchema(), properties)
  // Read from beginning of topic
  myKConsumer.setStartFromEarliest()

  val streamSource = env
    .addSource(myKConsumer)

  // Transform CSV (with a header row) per Kafka event into a Transfers object
  val streamTransfers = streamSource.map(new TransfersMapper())

  // create a TableEnvironment
  val tEnv = StreamTableEnvironment.create(env)

  // register a Table
  val tblTransfers: Table = tEnv.fromDataStream(streamTransfers)
  tEnv.createTemporaryView("transfers", tblTransfers)

  tEnv.connect(
    new Elasticsearch()
      .version("6")
      .host("elasticsearch-elasticsearch-coordinating-only.default.svc.cluster.local", 9200, "http")
      .index("transfers-sum")
      .keyNullLiteral("n/a"))
    .withFormat(new Json().jsonSchema("{ \"curr_careUnit\": {\"type\": \"text\"}, \"sum\": {\"type\": \"float\"} }"))
    .withSchema(new Schema()
      .field("curr_careUnit", DataTypes.STRING())
      .field("sum", DataTypes.FLOAT()))
    .inUpsertMode()
    .createTemporaryTable("transfersSum")

  val result = tEnv.sqlQuery(
    """
      |SELECT curr_careUnit, sum(los)
      |FROM transfers
      |GROUP BY curr_careUnit
      |""".stripMargin)

  result.insertInto("transfersSum")
  env.execute("Flink Streaming Demo Dump to Elasticsearch")
}
I am creating a fat jar and uploading it to my remote Flink instance. Here are my build.gradle dependencies:
compile 'org.scala-lang:scala-library:2.11.12'
compile 'org.apache.flink:flink-scala_2.11:1.10.0'
compile 'org.apache.flink:flink-streaming-scala_2.11:1.10.0'
compile 'org.apache.flink:flink-connector-kafka-0.10_2.11:1.10.0'
compile 'org.apache.flink:flink-table-api-scala-bridge_2.11:1.10.0'
compile 'org.apache.flink:flink-connector-elasticsearch6_2.11:1.10.0'
compile 'org.apache.flink:flink-json:1.10.0'
compile 'com.fasterxml.jackson.core:jackson-core:2.10.1'
compile 'com.fasterxml.jackson.module:jackson-module-scala_2.11:2.10.1'
compile 'org.json4s:json4s-jackson_2.11:3.7.0-M1'
Here is how the fatJar task is built in Gradle:
jar {
    from {
        (configurations.compile).collect {
            it.isDirectory() ? it : zipTree(it)
        }
    }
    manifest {
        attributes("Main-Class": "main")
    }
}

task fatJar(type: Jar) {
    zip64 true
    manifest {
        attributes 'Main-Class': "flinkNamePull.Demo"
    }
    baseName = "${rootProject.name}"
    from { configurations.compile.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}
Could anybody please help me see what I am missing? I'm fairly new to Flink and data streaming in general.
Thank you in advance!
Is the list under "The following factories have been considered:" complete? Does it contain Elasticsearch6UpsertTableSinkFactory? If not, then as far as I can tell there is a problem with the service discovery for the dependencies.
How do you submit your job? Can you check if the uber jar contains a file META-INF/services/org.apache.flink.table.factories.TableFactory with an entry for Elasticsearch6UpsertTableSinkFactory?
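One quick way to verify that is a small Scala sketch like the one below (the jar path is a placeholder; jar tf on the command line works just as well):
import java.util.zip.ZipFile
import scala.io.Source

object CheckServiceFile {
  def main(args: Array[String]): Unit = {
    // Placeholder path -- point this at your actual fat jar.
    val jar = new ZipFile("build/libs/your-fat-jar.jar")
    val entry = jar.getEntry("META-INF/services/org.apache.flink.table.factories.TableFactory")
    if (entry == null) {
      println("Service file is missing from the jar")
    } else {
      // Every factory that survived the fat-jar merge is listed here;
      // Elasticsearch6UpsertTableSinkFactory should be among the printed lines.
      Source.fromInputStream(jar.getInputStream(entry)).getLines().foreach(println)
    }
    jar.close()
  }
}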
When using maven you have to add a transformer to properly merge service files:
<!-- The service transformer is needed to merge META-INF/services files -->
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
I don't know how you would do that in Gradle.
EDIT:
Thanks to Arvid Heise
In Gradle, when using the shadowJar plugin, you can merge service files via:
// Merging Service Files
shadowJar {
    mergeServiceFiles()
}
You should use the shadow plugin to create the fat jar instead of doing it manually.
In particular, you want to merge service descriptors.

How to correctly connect to an Oracle 12c database in Play Framework?

I am new to Play Framework (Scala) and need some advice.
I use Scala 2.12 and Play Framework 2.6.20. I need to use several databases in my project. Right now I have connected a MySQL database as described in the documentation. How do I correctly connect the project to a remote Oracle 12c database?
application.conf:
db {
  mysql.driver = com.mysql.cj.jdbc.Driver
  mysql.url = "jdbc:mysql://host:port/database?characterEncoding=UTF-8"
  mysql.username = "username"
  mysql.password = "password"
}
First of all, I put the ojdbc8.jar file from the Oracle website into the lib folder.
Then I added libraryDependencies += "com.oracle" % "ojdbc8" % "12.1.0.1" to the build.sbt file. Finally, I wrote the settings into the application.conf file.
After that step I noticed an error in the terminal:
[error] (*:update) sbt.ResolveException: unresolved dependency: com.oracle#ojdbc8;12.1.0.1: not found
[error] Total time: 6 s, completed 10.11.2018 16:48:30
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256M; support was removed in 8.0
EDIT:
application.conf:
db {
  mysql.driver = com.mysql.cj.jdbc.Driver
  mysql.url = "jdbc:mysql://#host:#port/#database?characterEncoding=UTF-8"
  mysql.username = "#username"
  mysql.password = "#password"

  oracle.driver = oracle.jdbc.driver.OracleDriver
  oracle.url = "jdbc:oracle:thin:#host:#port/#sid"
  oracle.username = "#username"
  oracle.password = "#password"
}
ERROR:
play.api.UnexpectedException: Unexpected exception[CreationException: Unable to create injector, see the following errors:
1) No implementation for play.api.db.Database was bound.
while locating play.api.db.Database
for the 1st parameter of controllers.GetMarkersController.<init>(GetMarkersController.scala:14)
while locating controllers.GetMarkersController
for the 7th parameter of router.Routes.<init>(Routes.scala:45)
at play.api.inject.RoutesProvider$.bindingsFromConfiguration(BuiltinModule.scala:121):
Binding(class router.Routes to self) (via modules: com.google.inject.util.Modules$OverrideModule -> play.api.inject.guice.GuiceableModuleConversions$$anon$1)
GetMarkersController.scala:
package controllers

import javax.inject._
import akka.actor.ActorSystem
import play.api.Configuration
import play.api.mvc.{AbstractController, ControllerComponents}
import play.api.libs.ws._
import scala.concurrent.duration._
import scala.concurrent.{ExecutionContext, Future, Promise}
import services._
import play.api.db.Database

class GetMarkersController @Inject()(db: Database, conf: Configuration, ws: WSClient, cc: ControllerComponents, actorSystem: ActorSystem)(implicit exec: ExecutionContext) extends AbstractController(cc) {

  def getMarkersValues(start_date: String, end_date: String) = Action.async {
    getValues(1.second, start_date: String, end_date: String).map {
      message => Ok(message)
    }
  }

  private def getValues(delayTime: FiniteDuration, start_date: String, end_date: String): Future[String] = {
    val promise: Promise[String] = Promise[String]()
    val service: GetMarkersService = new GetMarkersService(db)
    actorSystem.scheduler.scheduleOnce(delayTime) {
      promise.success(service.get_markers(start_date, end_date))
    }(actorSystem.dispatcher)
    promise.future
  }
}
You cannot access the Oracle Maven repository without credentials. You need to have an account with Oracle. Then add something like the following to your build.sbt file:
resolvers += "Oracle" at "https://maven.oracle.com"
credentials += Credentials("Oracle", "maven.oracle.com", "username", "password")
More information about accessing the OTN: https://docs.oracle.com/middleware/1213/core/MAVEN/config_maven_repo.htm#MAVEN9012
If you have the jar locally, you don't need to include it as a managed dependency. See unmanagedDependencies: https://www.scala-sbt.org/1.x/docs/Library-Dependencies.html
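A minimal sketch of that unmanaged route, assuming ojdbc8.jar stays in the project's lib/ folder (sbt's default unmanagedBase):
// build.sbt -- unmanaged-jar route (sketch)
// sbt puts every jar found in lib/ on the classpath automatically,
// so the unresolved "com.oracle" % "ojdbc8" % "12.1.0.1" line can simply be removed.
// Only needed if the jar lives somewhere other than lib/ (the folder name here is hypothetical):
unmanagedBase := baseDirectory.value / "custom_lib"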

Spark query SQL Server

I'm trying to query SQL Server using Spark/Scala and running into an issue.
Here is the code:
import org.apache.spark.{SparkConf, SparkContext}

object temp {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("temp").setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)

    val jdbcSqlConnStr = "jdbc:sqlserver://XXX.XXX.XXX.XXX;databaseName=test;user=XX;password=XXXXXXX;"
    val jdbcDbTable = "[test].dbo.[Persons]"

    val jdbcDF = sqlContext.read.format("jdbc").options(
      Map("url" -> jdbcSqlConnStr,
          "dbtable" -> jdbcDbTable)).load()

    jdbcDF.show(10)
    println("Complete")
  }
}
Below is the error. I assume it is complaining about the main method, but why? And how do I fix it?
Error:
Exception in thread "main" java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:888)
at org.apache.spark.sql.SQLContext.(SQLContext.scala:70)
at apachetika.temp$.main(sqltemp.scala:24)
at apachetika.temp.main(sqltemp.scala)
18/09/28 16:04:40 INFO spark.SparkContext: Invoking stop() from shutdown hook
As far as I can tell this is due to a Scala version mismatch.
The library you compiled against the spark-core dependency is built for Scala 2.11, not Scala 2.10. Use Scala 2.11.8+.
Hope this helps.
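A minimal build.sbt sketch of the idea, assuming Spark 2.x artifacts built for Scala 2.11 (the versions shown are examples, not taken from the question):
// build.sbt -- keep the project's Scala version in line with the _2.11 Spark artifacts
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.3.0",  // resolves to spark-core_2.11
  "org.apache.spark" %% "spark-sql"  % "2.3.0"   // resolves to spark-sql_2.11
)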

How to get adf-richclient-automation-11.jar compatible with the latest selenium-java 3.x

I'm developing Selenium tests for an Oracle ADF application.
I'm using for that:
JDeveloper fmw_12.2.1.3.0_bpmqs installation
The very useful library SeleniumTools
And I faced a problem:
SeleniumTools is based on adf-richclient-automation-11.jar, which is distributed with JDeveloper (you can find it in the *Oracle_Home\oracle_common\modules\oracle.adf.view* folder) and is described in the docs as Oracle Customized Selenium.
Everything works fine with the selenium-java library up to version 2.53.1.
But when I upgrade the selenium-java library to version 3.3.1, my test project fails with an exception:
org.openqa.selenium.WebDriverException: java.lang.NoSuchMethodError: org.openqa.selenium.support.ui.**WebDriverWait.until(Lcom/google/common/base/Function;)Ljava/lang/Object;** Build info: version: 'unknown', revision: '5234b32', time: '2017-03-10 09:00:17 -0800' System info: host: 'EE-LATITUDE-749', ip: '10.10.207.64', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_172' Driver info: driver.version: unknown
com.redheap.selenium.junit.PageProvider.createPage(PageProvider.java:49)
com.redheap.selenium.junit.PageProvider.goHome(PageProvider.java:36)
ru.russvet.selenium.tests.P6_ProcessPageTest.(P6_ProcessPageTest.java:38)
java.lang.reflect.Constructor.newInstance(Constructor.java:423)
Caused by: java.lang.NoSuchMethodError: org.openqa.selenium.support.ui.WebDriverWait.until(Lcom/google/common/base/Function;)Ljava/lang/Object;
oracle.adf.view.rich.automation.selenium.RichWebDrivers.waitForServer(RichWebDrivers.java:112)
oracle.adf.view.rich.automation.selenium.RichWebDrivers.waitForRichPageToLoad(RichWebDrivers.java:175)
oracle.adf.view.rich.automation.selenium.RichWebDrivers.waitForRichPageToLoad(RichWebDrivers.java:158)
com.redheap.selenium.page.Page.(Page.java:53)
com.redheap.selenium.page.Page.(Page.java:45)
ru.russvet.selenium.pages.BPMWorkspaceLoginPage.(BPMWorkspaceLoginPage.java:19)
com.redheap.selenium.junit.PageProvider.createPage(PageProvider.java:47)
My investigation points to the following reasons:
1) In the selenium-java library, starting from 3.3.1, the interface of the until function has been changed to accept the Function, Predicate or Supplier classes from the Java 8 library instead of the Guava library (https://github.com/SeleniumHQ/selenium/commit/b2aa9fd534f7afbcba319231bb4bce85f825ef09):
-import com.google.common.base.Function;
-import com.google.common.base.Predicate;
-import com.google.common.base.Supplier;
+import java.util.function.Function;
+import java.util.function.Predicate;
+import java.util.function.Supplier;
2) This could probably be worked around with a recent Guava release (version 21+), where the Google versions of both Function and Predicate extend the Java 8 equivalents.
So it looks like adf-richclient-automation-11.jar is built against the selenium-java 2.x library, and that is what causes the exception when running the tests.
We raised this via Oracle support, but there is no information from them about a new version of this library yet.
So, my questions are:
1) What is a possible way to rebuild adf-richclient-automation-11.jar to make it compatible with the latest selenium-java 3.x?
2) Has anybody found a newer version of adf-richclient-automation-11.jar, maybe in some specific distribution of JDeveloper?
I was once upon a time part of the small team within Oracle that built the automation library you are referring to.
The issue here is API compatibility, and unless the ADF automation library is repackaged against WebDriver 3.x and redistributed by Oracle OTN, you have no option but to stick to the Selenium 2.x libraries. WebDriver 3.x is meant for Java 8, which would be one more reason for Oracle to upgrade to 3.x soon. Have you raised your concerns via the OTN forums or Oracle support?
Well, answering myself. The only way for now to make this work was to decompile adf-richclient-automation-11.jar, clean it, and repackage it against WebDriver 3.x.
The corresponding Eclipse project is here: https://github.com/EgorBEremeev/adf-richclient-automation-selenium-3-rebuild
This project does not contain the test classes of the original lib. I didn't run any tests from the original lib because I have no idea what test environment they require.
So I tested the repackaged library directly in my main project.
However, the complete steps to get the sources, clean them, fix errors and repackage the library can be found in the readme.md in the git repository, and also below:
Full steps to manually rebuild adf-richclient-automation-11.jar:
Environment
Install Eclipse
Install Decompiler plugin
Help -> Marketplace -> Enhanced Class Decompiler
Windows -> Preferences -> Java -> Decompiler -> Default Class Decompiler: CFR -> Apply and Close
Set User Libraries
Windows -> Preferences -> Java -> Build Path -> User Libraries
New->
Name -> selenium-java-3.3.1
Add External JARs... ->
path\to\selenium-java-3.3.1\
client-combined-3.3.1-nodeps.jar
lib\*.jar
->Finish
New->
Name -> adf-richclient-automation-11.jar
Add External JARs... ->
path\to\Oracle_Home\oracle_common\modules\oracle.adf.view\
adf-richclient-automation-11.jar
->Finish
-> Apply and Close
Steps
Create Java Project
Eclipse -> New -> Java Project
Name -> project_name
JDK -> 1.8
Build Path -> Libraries -> Add Library -> User Library -> Next
User Libraries ...
selenium-java-3.3.1
adf-richclient-automation-11.jar
Decompile adf-richclient-automation-11.jar
Project Explorer -> adf-richclient-automation-11.jar -> Context Menu -> Export Sources
path\to\project_name\src\
adf-richclient-automation-11-src.zip
Project Explorer -> Refresh
src -> adf-richclient-automation-11-src.zip
Extract decompiled sources into path\to\project_name\src\
Check the src
Project Explorer -> Refresh
src -> adf-richclient-automation-11-src.zip
* oracle.adf.view.rich.automation.selenium
* oracle.adf.view.rich.automation.test
oracle.adf.view.rich.automation.test.browserfactory
* oracle.adf.view.rich.automation.test.component
* oracle.adf.view.rich.automation.test.selenium
org.openqa.selenium
org.openqa.selenium.firefox
5.1 Delete classes used for and with Selenium RC:
path/to/project_name/src/
oracle/adf/view/rich/automation/selenium/RichSelenium.java -> Delete
5.2 Delete packages oracle.adf.view.rich.automation.test.* -> Delete
oracle.adf.view.rich.automation.test
oracle.adf.view.rich.automation.test.browserfactory
oracle.adf.view.rich.automation.test.component
oracle.adf.view.rich.automation.test.selenium
Fix errors:
path/to/project_name/src/oracle/adf/view/rich/automation/selenium/RichWebDrivers.java
[] 241 Type mismatch: cannot convert from element type Object to String ->
fix 239 -> List<String> logs = (List) jsExecutor.executeScript(_GET_AND_CLEAR_LOG_MESSAGES_JS,
=
List<String> logs = (List) jsExecutor.executeScript(_GET_AND_CLEAR_LOG_MESSAGES_JS,
new Object[]{logLevel.toString().toUpperCase()});
for (String s : logs) {
sbf.append(s).append(_NEW_LINE);
}
[] 321 Type mismatch: cannot convert from element type Object to String ->
fix 320 -> Set<String> handles = webDriver.getWindowHandles();
=
public String apply(WebDriver webDriver) {
Set<String> handles = webDriver.getWindowHandles();
for (String handle : handles) {
if (openWindowHandles.contains(handle))
continue;
return handle;
}
return null;
}
Build and Export into jar
remove -> path\to\project_name\src\adf-richclient-automation-11-src.zip
Project Explorer -> Export -> Java -> JAR file -> Next
select src folder only
check Export generated classes and resources
uncheck .classpath, .project
-> Finish -> Ok in warning dialog
Optionally, fix errors in classes from the oracle.adf.view.rich.automation.test.* packages.
path/to/project_name/src/oracle/adf/view/rich/automation/test/selenium/WebDriverManager.java
[] 87 Type mismatch: cannot convert from element type Object to String ->
fix 85 Set<String> windowHandles = webDriver.getWindowHandles();
=
try {
Set<String> windowHandles = webDriver.getWindowHandles();
_LOG.fine("try to close all windows... ");
for (String handle : windowHandles) {
path/to/project_name/src/oracle/adf/view/rich/automation/test/selenium/RichWebDriverTest.java
[] 953 Syntax error on token "finally", delete this token ->
fix -> delete 956,952,949, 941
=
protected void refresh() {
_LOG.fine("Executing refresh()");
this.getWebDriver().navigate().refresh();
try {
Alert alert = this.getWebDriver().switchTo().alert();
if (alert != null) {
alert.accept();
};
}
catch (WebDriverException alert) {}
finally {
this.waitForPage();
}
}
[] 1026 Unreachable catch block for Exception. It is already handled by the catch block for Throwable ->
fix -> replace whole method by variant of Jad Decompiler->
-> Windows -> Preferences -> Java -> Decompiler -> Default Class Decompiler: Jad -> Apply and Close
-> fix 1020, 1028 Duplicate local variable cachingEnabled ->
fix-> delete
-> 1019 String msg;
-> 1018 boolean cachingEnabled;
=
protected void onShutdownBrowser() {
_LOG.finest("Shutting down browser");
try {
_logSeleniumBrowserLogAndResetLoggerLevel();
} catch (Exception e) {
boolean cachingEnabled;
String msg;
_LOG.warning("The page did not generate any logs.");
} finally {
boolean cachingEnabled = isBrowserCachingEnabled();
try {
if (cachingEnabled) {
getWebDriverManager().releaseInstance();
} else {
getWebDriverManager().destroyInstance();
}
} catch (Throwable t) {
String msg = cachingEnabled
? "Failed to release the browser. Error message: %s"
: "Failed to shutdown the browser. Error message: %s";
_LOG.severe(String.format(msg, new Object[]{t.getMessage()}));
}
}
}
[] 1047 Type mismatch: cannot convert from element type Object to WebElement ->
fix 1046 List<WebElement> allOptions = element.findElements(By.xpath((String) builder.toString()));
=
List<WebElement> allOptions = element.findElements(By.xpath((String) builder.toString()));
for (WebElement option : allOptions) {
path/to/project_name/src/oracle/adf/view/rich/automation/test/UrlFactories.java
[] 34 Type mismatch: cannot convert from UrlFactory to UrlFactories.UrlFactoryImpl ->
fix Add cast to 'UrlFactoryImpl'
=
factory = (UrlFactoryImpl) urlFactoryIterator.next();
[] 52 Type mismatch: cannot convert from UrlFactory to UrlFactories.UrlFactoryImpl
fix Add cast to 'UrlFactoryImpl'
=
UrlFactoryImpl urlFactoryImpl = (UrlFactoryImpl) (_INSTANCE = factory != null ? factory : new UrlFactoryImpl());
