When using Exposed's method "createMissingTablesAndColumns", I get a NoSuchElementException - kotlin-exposed

I'm using the Exposed 0.22.1 libraries (core/dao/jdbc), and when I call "SchemaUtils.createMissingTablesAndColumns", my tables are created but I get an exception.
If I relaunch the server after adding a new column, I get the same exception and the update isn't performed.
Here is the stack trace:
Exception in thread "main" java.util.NoSuchElementException: Collection contains no element matching the predicate.
at org.jetbrains.exposed.sql.statements.jdbc.JdbcDatabaseMetadataImpl$tableConstraints$$inlined$associateWith$lambda$1.invoke(JdbcDatabaseMetadataImpl.kt:170)
at org.jetbrains.exposed.sql.statements.jdbc.JdbcDatabaseMetadataImpl$tableConstraints$$inlined$associateWith$lambda$1.invoke(JdbcDatabaseMetadataImpl.kt:13)
at org.jetbrains.exposed.sql.statements.jdbc.JdbcDatabaseMetadataImplKt.iterate(JdbcDatabaseMetadataImpl.kt:164)
at org.jetbrains.exposed.sql.statements.jdbc.JdbcDatabaseMetadataImpl.tableConstraints(JdbcDatabaseMetadataImpl.kt:123)
at org.jetbrains.exposed.sql.vendors.VendorDialect$fillConstraintCacheForTables$1.invoke(Default.kt:639)
at org.jetbrains.exposed.sql.vendors.VendorDialect$fillConstraintCacheForTables$1.invoke(Default.kt:560)
at org.jetbrains.exposed.sql.statements.jdbc.JdbcConnectionImpl.metadata(JdbcConnectionImpl.kt:47)
at org.jetbrains.exposed.sql.Database.metadata$exposed_core(Database.kt:31)
at org.jetbrains.exposed.sql.vendors.VendorDialect.fillConstraintCacheForTables(Default.kt:639)
at org.jetbrains.exposed.sql.vendors.VendorDialect.columnConstraints(Default.kt:617)
at org.jetbrains.exposed.sql.SchemaUtils.addMissingColumnsStatements(SchemaUtils.kt:145)
at org.jetbrains.exposed.sql.SchemaUtils.createMissingTablesAndColumns(SchemaUtils.kt:241)
at org.jetbrains.exposed.sql.SchemaUtils.createMissingTablesAndColumns$default(SchemaUtils.kt:229)
at repository.database.DatabaseRepository$createTableAndColumn$1.invoke(DatabaseRepository.kt:36)
at repository.database.DatabaseRepository$createTableAndColumn$1.invoke(DatabaseRepository.kt:17)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$inTopLevelTransaction$1.invoke(ThreadLocalTransactionManager.kt:156)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$inTopLevelTransaction$2.invoke(ThreadLocalTransactionManager.kt:197)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:205)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.inTopLevelTransaction(ThreadLocalTransactionManager.kt:196)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$transaction$1.invoke(ThreadLocalTransactionManager.kt:134)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:205)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:106)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:104)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction$default(ThreadLocalTransactionManager.kt:103)
at repository.database.DatabaseRepository.createTableAndColumn(DatabaseRepository.kt:35)
at repository.database.DatabaseRepository.createAndUpdateDB(DatabaseRepository.kt:31)
at domain.database.PrepareDatabaseUseCase.performNow(PrepareDatabaseUseCase.kt:18)
at controller.MainController.<init>(MainController.kt:18)
at controller.MainControllerKt.main(MainController.kt:10)
My tables are simple (just for testing):
import org.jetbrains.exposed.dao.id.IntIdTable
object UserTable: IntIdTable()
and
import org.jetbrains.exposed.dao.id.EntityID
import org.jetbrains.exposed.dao.id.IntIdTable
import org.jetbrains.exposed.sql.Column
object SpaceTable: IntIdTable() {
    val userIdColumn: Column<EntityID<Int>> = reference("userId", UserTable.id)
}
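For reference, the call in DatabaseRepository boils down to something like this (a minimal sketch based on the stack trace above; the Database.connect setup happens elsewhere and isn't shown):
import org.jetbrains.exposed.sql.SchemaUtils
import org.jetbrains.exposed.sql.transactions.transaction

fun createTableAndColumn() {
    // Database.connect(...) is assumed to have been called beforehand
    transaction {
        // The tables themselves are created, but the exception is thrown while Exposed
        // reads the existing table constraints to compute the missing-column statements
        SchemaUtils.createMissingTablesAndColumns(UserTable, SpaceTable)
    }
}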
When I use the method "SchemaUtils.create" instead, everything is OK, but of course when I relaunch the server with some new columns, the columns aren't created.
Do you have any idea how to fix this? What am I doing wrong?
Thank you

Related

How can we initialize DataChangeDetectionPolicy using the .NET SDK?

I have created a new index that is populated using an indexer. The indexer's data source is a SQL view that has a Timestamp column of type datetime. Since we don't want a full reindex each time the indexer runs, this column should be used to determine which data has changed since the last indexer run.
According to the documentation, we need to create or update the data source by setting the HighWatermarkColumnName and ODataType on the DataChangeDetectionPolicy object. The example in the documentation uses the REST API, and there is also a way to do it directly from the Azure Search portal.
However, I want to do it using the .NET SDK, and so far I haven't been able to. I am using Azure.Search.Documents (11.2.0-beta.2). Here is the part of the code I use to create the data source:
SearchIndexerDataSourceConnection CreateIndexerDataSource()
{
    var ds = new SearchIndexerDataSourceConnection(DATASOURCE,
        SearchIndexerDataSourceType.AzureSql,
        this._datasourceConStringMaxEvents,
        new SearchIndexerDataContainer(SQLVIEW));
    //ds.DataChangeDetectionPolicy = new DataChangeDetectionPolicy();
    return ds;
}
The commented-out line is what I tried in order to initialize the DataChangeDetectionPolicy, but no constructor is exposed. Am I missing something?
Thanks in advance.
Instead of using DataChangeDetectionPolicy, you need to use HighWaterMarkChangeDetectionPolicy, which derives from the abstract DataChangeDetectionPolicy (which is why no constructor is exposed on the base class).
So your code would be something like:
ds.DataChangeDetectionPolicy = new HighWaterMarkChangeDetectionPolicy("Timestamp");
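In the context of your method, that would look roughly like this (a sketch reusing the names from your snippet; HighWaterMarkChangeDetectionPolicy should come from the Azure.Search.Documents.Indexes.Models namespace, if I recall correctly):
SearchIndexerDataSourceConnection CreateIndexerDataSource()
{
    var ds = new SearchIndexerDataSourceConnection(DATASOURCE,
        SearchIndexerDataSourceType.AzureSql,
        this._datasourceConStringMaxEvents,
        new SearchIndexerDataContainer(SQLVIEW));
    // "Timestamp" is the high-water-mark column exposed by the SQL view
    ds.DataChangeDetectionPolicy = new HighWaterMarkChangeDetectionPolicy("Timestamp");
    return ds;
}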

Apache Flink: Unable to convert the Table object to DataSet object

I am using the Table API on Flink 1.4.0. I have some Table objects that need to be converted to a DataSet of type Row. The project was built using Maven and imported into IntelliJ. I have the following code, and the IDE cannot resolve the tableEnvironment.toDataSet() method. Please help me out. Thank you.
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnvironment = TableEnvironment.getTableEnvironment(env);
...
tableEnvironment.registerTableSource("table1",csvSource);
Table table1 = tableEnvironment.scan("table1");
DataSet<Row> result = tableEnvironment.toDataSet(table1, Row.class);
The last statement causes an error
"Cannot resolve toDataSet() method"
You might not be importing the right BatchTableEnvironment.
Please check that you import org.apache.flink.table.api.java.BatchTableEnvironment instead of org.apache.flink.table.api.BatchTableEnvironment. The latter is the common base class for the Java and Scala variants and does not define toDataSet().
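With the Java-specific class imported, the snippet from your question should compile. For example (a sketch with the imports spelled out, assuming the csvSource registration from your code):
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.types.Row;

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnvironment = TableEnvironment.getTableEnvironment(env);
Table table1 = tableEnvironment.scan("table1");
// toDataSet is defined on the java-package BatchTableEnvironment, not on the base class
DataSet<Row> result = tableEnvironment.toDataSet(table1, Row.class);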
If you want to read a DataSet from a CSV file, do it like the following:
DataSet<YourType> csvInput = env.readCsvFile("hdfs:///the/CSV/file") ...
More on this: https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/batch/#data-sources

Error while generating nodes with neo4j via neo4j-console

I'm trying to put data into my graph DB using Neo4j. I'm new to the field, and I don't find it easy to use the batch import tool that Michael Hunger wrote.
My goal is to generate at least 10,000 nodes with just one property set. So I wrote a Python script that generates 10,000 lines of Cypher queries like "CREATE (:label{ number : '3796142470'})".
I paste them into the console and execute them, but I get this exception:
StackTrace:
scala.collection.immutable.List.take(List.scala:84)
org.neo4j.cypher.internal.compiler.v2_0.ast.SingleQuery.checkOrder(Query.scala:33)
Am I doing something wrong? In case the only way to generate those nodes is to use a batch/REST API, could you suggest an easier way to do it?
Change:
CREATE (:label{ number : '3796142470'})
to look like:
CREATE (n1:Label { number : '3796142470'})
That way you follow the convention:
CREATE (n:Person { name : 'Andres', title : 'Developer' })
Put them into a file (say import.txt) and then:
bin/neo4j-shell -file import.txt
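The file then just contains the generated statements, one per line, each terminated with a semicolon so the shell executes them one at a time. For example (the numbers here are only illustrative):
CREATE (n1:Label { number : '3796142470' });
CREATE (n2:Label { number : '3796142471' });
CREATE (n3:Label { number : '3796142472' });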
See http://blog.neo4j.org/2014/01/importing-data-to-neo4j-spreadsheet-way.html for more details.

Migrating working ServiceStack to live causes Unable to cast object of type 'System.Byte' to type 'System.String'

I have developed a ServiceStack API using OrmLite, backed by SQL Server. The app works perfectly pointing at both my local SQL database and an Azure database. Happy Days!
I have now tried to move this solution to the live server, which has its own local copy of the same database, and I am getting strange results. The error is:
Error Code: InvalidCastException
Message: Unable to cast object of type 'System.Byte' to type 'System.String'.
[EMEM: 1/16/2014 11:49:29 AM]: [REQUEST: {Equipment:DP112}]
System.InvalidCastException: Unable to cast object of type 'System.Byte' to type 'System.String'. at lambda_method(Closure , Object , Object ) at
ServiceStack.OrmLite.OrmLiteDialectProviderBase`1.SetDbValue(FieldDefinition fieldDef, IDataReader dataReader, Int32 colIndex, Object instance) at
ServiceStack.OrmLite.ReadExtensions.ExprConvertToList[T](IDataReader dataReader) at ServiceStack.OrmLite.OrmLiteResultsFilterExtensions.ExprConvertToList[T](IDbCommand dbCmd, String sql) at
ServiceStack.OrmLite.ReadConnectionExtensions.Exec[T](IDbConnection dbConn, Func`2 filter) at
ViewPoint.EquipmentService.Get(EMEM request) at
ServiceStack.Host.ServiceRunner`1.Execute(IRequest request, Object instance, TRequest requestDto)
I have checked the database schemas and they look identical.
This is the code that works quite happily on 2 out of the 3 databases, but not on the third.
public object Get(EMEM request)
{
    var dbFactory = new OrmLiteConnectionFactory(WebConfigurationManager.ConnectionStrings["db"].ToString(), SqlServerDialect.Provider);
    using (IDbConnection db = dbFactory.OpenDbConnection())
    {
        if (request.Equipment == null)
        {
            List<EMEM> results = db.Select<EMEM>();
            return results;
        }
        else
        {
            List<EMEM> results = db.Select<EMEM>(p => p.Where(ev => ev.Equipment == request.Equipment));
            return results;
        }
    }
}
I can literally fix the problem by pointing the connection string at the Azure database, which tends to suggest it's database-related(?)
Extra Information:
I have also written a put method which updates a row in the database and that works fine.
On 2 of the servers EMEM is a table but on the third, where it doesn't work, it's a View.
Can anyone suggest where to start looking for this problem?
UPDATE: I have now created a View on my local development database so it should now be identical to the live database. I was expecting this to break the local dev site but it hasn't... :(
BINGO! FIXED IT!
IT WAS linked to the View, but it wasn't the View's fault....
The View was looking at a table with different data types for most of the columns. The demo table I was working against had all the columns set to strings!
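To illustrate (with a made-up column name, since I haven't listed the real schema): the DTO declared everything as a string, which matched the demo table, but on the live View one of those columns comes back as tinyint:
public class EMEM
{
    public string Equipment { get; set; }

    // Demo table: varchar, so this mapped fine.
    // Live View: tinyint, so the data reader hands OrmLite a System.Byte
    // and the string setter throws the InvalidCastException above.
    public string Status { get; set; }
}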
So, look out when people give you "demo tables, with identical data to the live" to develop against.
They aren't always identical!!
HTH

How to find unique selectors for elements on pages with ExtJS for use with Selenium?

I am using Selenium with the Firefox WebDriver to work with elements on a page whose element IDs are auto-generated on every page load, so the IDs change each time and I am unable to use them to locate an element. This is because the page is a web application built with ExtJS.
I am trying to use Firebug to get the element information.
I need to find a unique XPath or selector so I can select each element individually with Selenium.
When I use Firebug to copy the XPath, I get a value like this:
//*[#id="ext-gen1302"]
However, the next time the page is loaded it looks like this:
//*[#id="ext-gen1595"]
On that page every element has this ID format, so the CSS ID cannot be used to find the element.
I want an XPath expressed in terms of the element's position in the DOM, something like the one below, but Firebug will only return the ID-based XPath since it is unique for that instance of the page:
/html/body/div[4]/div[3]/div[4]/div/div/div/span[2]/span
How can I get Firebug (or another tool that would work with similar speed) to give me a unique selector that can be used to find the element with Selenium even after the ext-gen ID changes?
Another victim of ExtJS UI automation testing! Here are my tips, specifically for testing ExtJS. (However, this won't answer the question described in your title.)
Tip 1: Don't ever use an unreadable XPath like /div[4]/div[3]/div[4]/div/div/div/span[2]/span. One tiny change in the source code can change the DOM structure, and that will cause huge maintenance costs.
Tip 2: Take advantage of meaningful auto-generated partial IDs and class names.
For example, for an ExtJS grid, By.cssSelector(".x-grid-view .x-grid-table") would be handy. If there are multiple grids, try indexing them or locating an identifiable ancestor, like By.cssSelector("#something-meaningful .x-grid-view .x-grid-table").
Tip 3: Create meaningful class names in the source code. ExtJS provides cls and tdCls for custom class names, so you can add cls:'testing-btn-cancel' in your source code and locate the element with By.cssSelector(".testing-btn-cancel").
Tip 3 is the best and the final one. If you don't have access to the source code, talk to your manager; Selenium UI automation should really be a developer's job, for someone who can modify the source code, rather than an end-user-like tester's.
I would recommend using CSS in this instance by doing By.cssSelector("span[id^='ext-gen']").
The above statement means "select a span element with an id that starts with ext-gen". (If it needs to be more specific, you can reply, and I'll see if I can help you.)
Alternatively, if you want to use XPath, look at this answer: Xpath for selecting html id including random number
Although it is not desirable in some cases, as mentioned above, you can parse through the elements and generate XPath IDs.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
public class XPATHDriverWrapper {

    // Maps between a generated positional XPath and the corresponding element, and back
    Map<String, WebElement> xpathIDToWebElementMap = new LinkedHashMap<String, WebElement>();
    Map<WebElement, String> webElementToXPATHIDMap = new LinkedHashMap<WebElement, String>();

    public XPATHDriverWrapper(WebDriver driver) {
        WebElement htmlElement = driver.findElement(By.xpath("/html"));
        iterateThroughChildren(htmlElement, "/html");
    }

    // Recursively walks the DOM and assigns each element a positional XPath such as /html/body/div[2]
    private void iterateThroughChildren(WebElement parentElement, String parentXPATH) {
        Map<String, Integer> siblingCountMap = new LinkedHashMap<String, Integer>();
        List<WebElement> childrenElements = parentElement.findElements(By.xpath(parentXPATH + "/*"));
        for (int i = 0; i < childrenElements.size(); i++) {
            WebElement childElement = childrenElements.get(i);
            String childTag = childElement.getTagName();
            String childXPATH = constructXPATH(parentXPATH, siblingCountMap, childTag);
            xpathIDToWebElementMap.put(childXPATH, childElement);
            webElementToXPATHIDMap.put(childElement, childXPATH);
            iterateThroughChildren(childElement, childXPATH);
            // System.out.println("childXPATH:"+childXPATH);
        }
    }

    public WebElement findWebElementFromXPATHID(String xpathID) {
        return xpathIDToWebElementMap.get(xpathID);
    }

    public String findXPATHIDFromWebElement(WebElement webElement) {
        return webElementToXPATHIDMap.get(webElement);
    }

    // Builds the positional XPath for the next child with this tag,
    // e.g. parent "/html/body", tag "div", second occurrence -> "/html/body/div[2]"
    private String constructXPATH(String parentXPATH, Map<String, Integer> siblingCountMap, String childTag) {
        Integer count = siblingCountMap.get(childTag);
        if (count == null) {
            count = 1;
        } else {
            count = count + 1;
        }
        siblingCountMap.put(childTag, count);
        String childXPATH = parentXPATH + "/" + childTag + "[" + count + "]";
        return childXPATH;
    }
}
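Usage would be along these lines (a sketch; the FirefoxDriver setup and the URL are placeholders, and the XPath is the positional one from the question):
WebDriver driver = new FirefoxDriver();
driver.get("http://your-extjs-app.example.com");
// Builds the XPath-to-element maps once, for the current state of the page
XPATHDriverWrapper wrapper = new XPATHDriverWrapper(driver);
WebElement element = wrapper.findWebElementFromXPATHID("/html/body/div[4]/div[3]/div[4]/div/div/div/span[2]/span");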
Another wrapper that generates IDs from the Document is posted at: http://scottizu.wordpress.com/2014/05/12/generating-unique-ids-for-webelements-via-xpath/

Resources