I canceled a Flink job with a savepoint, then tried to restore the job from that savepoint (using the same jar file), but it says it cannot map the savepoint state. Since I used the same jar, shouldn't the execution plan, and therefore the operator IDs, be identical? Why would there be a new operator ID if I didn't change the code? Is it even possible to restore from a savepoint for a job that uses the Kafka connector and the Table API?
Related errors:
Caused by: java.util.concurrent.CompletionException: java.lang.IllegalStateException: Failed to rollback to checkpoint/savepoint file:/root/flink-savepoints/savepoint-5f285c-c2749410db07. Cannot map checkpoint/savepoint state for operator dd5fc1f28f42d777f818e2e8ea18c331 to the new program, because the operator is not available in the new program. If you want to allow to skip this, you can set the --allowNonRestoredState option on the CLI.
Caused by: java.lang.IllegalStateException: Failed to rollback to checkpoint/savepoint file:/root/flink-savepoints/savepoint-5f285c-c2749410db07. Cannot map checkpoint/savepoint state for operator dd5fc1f28f42d777f818e2e8ea18c331 to the new program, because the operator is not available in the new program. If you want to allow to skip this, you can set the --allowNonRestoredState option on the CLI.
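For reference, resuming from the savepoint with the flag the error mentions would look something like this (a sketch; the jar name is a placeholder, and be aware that any state that cannot be mapped to an operator is simply dropped on restore):

```shell
flink run \
  -s file:/root/flink-savepoints/savepoint-5f285c-c2749410db07 \
  --allowNonRestoredState \
  FlinkJob.jar
```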
My Code:
public final class FlinkJob {

    public static void main(String[] args) {
        final String JOB_NAME = "FlinkJob";

        final EnvironmentSettings settings = EnvironmentSettings.inStreamingMode();
        final TableEnvironment tEnv = TableEnvironment.create(settings);
        tEnv.getConfig().set("pipeline.name", JOB_NAME);
        tEnv.getConfig().setLocalTimeZone(ZoneId.of("UTC"));

        tEnv.executeSql("CREATE TEMPORARY TABLE ApiLog (" +
                " `_timestamp` TIMESTAMP(3) METADATA FROM 'timestamp' VIRTUAL," +
                " `_partition` INT METADATA FROM 'partition' VIRTUAL," +
                " `_offset` BIGINT METADATA FROM 'offset' VIRTUAL," +
                " `Data` STRING," +
                " `Action` STRING," +
                " `ProduceDateTime` TIMESTAMP_LTZ(6)," +
                " `OffSet` INT" +
                ") WITH (" +
                " 'connector' = 'kafka'," +
                " 'topic' = 'api.log'," +
                " 'properties.group.id' = 'flink'," +
                " 'properties.bootstrap.servers' = '<mykafkahost...>'," +
                " 'format' = 'json'," +
                " 'json.timestamp-format.standard' = 'ISO-8601'" +
                ")");

        tEnv.executeSql("CREATE TABLE print_table (" +
                " `_timestamp` TIMESTAMP(3)," +
                " `_partition` INT," +
                " `_offset` BIGINT," +
                " `Data` STRING," +
                " `Action` STRING," +
                " `ProduceDateTime` TIMESTAMP(6)," +
                " `OffSet` INT" +
                ") WITH ('connector' = 'print')");

        tEnv.executeSql("INSERT INTO print_table SELECT * FROM ApiLog");
    }
}
I have a calculator whose results are shown in a label. Is it possible to display the result values (some strings, some doubles) in bold?
My code looks like this:
...{
label2.Content = "your time: " + saldoMin +
" and: " + fooNeg +
" " + inH +
" : " + inMin +
" [h : min]\nyour factor: " + YourFactor +
"\n\ngo at: " + beginnH +
" : " + fooNull;
}
and I only want the values saldoMin, fooNeg, inH, ... to be bold, not the surrounding literal text.
You can use a TextBlock with Runs: a Label accepts a TextBlock as its Content, and the TextBlock can mix bold and non-bold Inlines. Applied to your values, it looks like this:
var text = new TextBlock();
// Static text stays a plain Run; the values are wrapped in Bold.
text.Inlines.Add(new Run("your time: "));
text.Inlines.Add(new Bold(new Run(saldoMin.ToString())));
text.Inlines.Add(new Run(" and: "));
text.Inlines.Add(new Bold(new Run(fooNeg.ToString())));
label2.Content = text;
Continue the same pattern for inH, inMin, and the rest.
I am getting an error related to setRowTypeInfo for a JDBCInputFormat. The error is below. Clearly the Tuple2 type of the DataSet doesn't like the RowTypeInfo of the JDBCInputFormat but I can't find anywhere that provides clarification on how to define the format.
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile
(default-compile) on project flink: Compilation failure [ERROR]
/Users/rocadmin/Desktop/flink/flink/src/main/java/svalarms/BatchJob.java:[125,48]
incompatible types: inferred type does not conform to equality
constraint(s) [ERROR] inferred:
org.apache.flink.api.java.tuple.Tuple2
[ERROR] equality constraints(s):
org.apache.flink.api.java.tuple.Tuple2,org.apache.flink.types.Row
[ERROR] [ERROR] -> [Help 1]
DataSet<Tuple2<Integer, Integer>> dbData =
    env.createInput(
        JDBCInputFormat.buildJDBCInputFormat()
            .setDrivername("oracle.jdbc.driver.OracleDriver")
            .setDBUrl("jdbc:oracle:thin:@//[ip]:1521/sdmprd")
            .setQuery(
                "SELECT T2.work_order_nbr, T2.work_order_nbr " +
                "FROM sdm.work_order_master T2 " +
                "WHERE " +
                "TO_DATE(T2.date_entered + 19000000,'yyymmdd') >= CURRENT_DATE - 14 " +
                "AND T2.W_O_TYPE = 'TC' " +
                "AND T2.OFFICE_ONLY_FLG = 'N' ")
            .setRowTypeInfo(new RowTypeInfo(BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.INT_TYPE_INFO))
            .finish()
    );
A JDBCInputFormat returns records of type Row. Hence, the resulting DataSet should be typed to Row, i.e.,
DataSet<Row> dbData =
    env.createInput(
        JDBCInputFormat.buildJDBCInputFormat()
            .setDrivername("oracle.jdbc.driver.OracleDriver")
            .setDBUrl("jdbc:oracle:thin:@//[ip]:1521/sdmprd")
            .setQuery(
                "SELECT T2.work_order_nbr, T2.work_order_nbr " +
                "FROM sdm.work_order_master T2 " +
                "WHERE " +
                "TO_DATE(T2.date_entered + 19000000,'yyyymmdd') >= CURRENT_DATE - 14 " +
                "AND T2.W_O_TYPE = 'TC' " +
                "AND T2.OFFICE_ONLY_FLG = 'N' ")
            .setRowTypeInfo(Types.ROW(Types.INT, Types.INT))
            .finish()
    );
Got it going with the following:
TypeInformation[] fieldTypes = new TypeInformation[] {
    BasicTypeInfo.BIG_DEC_TYPE_INFO,
    BasicTypeInfo.BIG_DEC_TYPE_INFO
};
RowTypeInfo rowTypeInfo = new RowTypeInfo(fieldTypes);

JDBCInputFormatBuilder inputBuilder = JDBCInputFormat.buildJDBCInputFormat()
    .setDrivername("oracle.jdbc.driver.OracleDriver")
    .setDBUrl("jdbc:oracle:thin:@//ipaddress:1521/sdmprd")
    .setQuery(
        "SELECT T2.work_order_nbr, T2.work_order_nbr " +
        "FROM sdm.work_order_master T2 " +
        "WHERE " +
        "TO_DATE(T2.date_entered + 19000000,'yyyymmdd') >= CURRENT_DATE - 14 " +
        "AND T2.W_O_TYPE = 'TC' " +
        "AND T2.OFFICE_ONLY_FLG = 'N' ")
    .setRowTypeInfo(rowTypeInfo)
    .setUsername("user")
    .setPassword("pass");

DataSet<Row> source = env.createInput(inputBuilder.finish());
So I've changed my code's database from H2 to PostgreSQL, and I've noticed that the inner join call that I used in H2 is not giving the same results in PostgreSQL. After research and testing, I found that the left join and the other joins work perfectly; only the inner join gives a different result. To get both output CSV files to match, would I have to change the whole structure of the table, or is there something similar in PostgreSQL that I'm overlooking?
public void doAllWork(int type, Connection conn, Statement st) {
    try {
        if (type == 1) {
            st.execute("DROP TABLE IF EXISTS COMBINEDDATA;"); // USING DISTINCT TO EXCLUDE DUPLICATE RECORDS
            st.execute("ANALYZE");
            st.execute("CREATE TABLE COMBINEDDATA AS \n"
                    + "SELECT DISTINCT E.DATA1, E.DATA2, E.DATA3, E.DATA4, E.DATA5, E.DATA6, \n"
                    + "E.DATA7, E.DATA8, E.DATA9, E.DATA10, E.DATA11, E.DATA12, E.DATA13, E.DATA14, E.DATA15, E.DATA16, E.DATA17, \n"
                    + "E.DATA18, E.DATA19, E.DATA21, E.DATA26, E.DATA27, E.DATA28, E.DATA29, \n"
                    + "E.DATA30, E.DATA31, E.DATA32, E.DATA34, E.DATA35, E.DATA36, E.DATA37, E.DATA38, \n"
                    + "C.CHAIN20, C.CHAIN33, C.CHAIN22, \n"
                    + "D.DAT2, D.DAT3, D.DAT4, D.DAT7, D.DAT11, D.DAT9, D.DAT5, \n"
                    + "E.DATA39, E.DATA40, E.DATA41 FROM rawData AS E \n"
                    + "RIGHT JOIN CHAINDATA AS C \n"
                    + "ON E.DATA7 = c.CHAIN2\n"
                    + "AND E.DATA11 = c.CHAIN4\n"
                    + "AND E.DATA21 = c.CHAIN10\n"
                    + "AND E.DATA22 = c.CHAIN11\n"
                    + "RIGHT JOIN DATDATA AS D\n"
                    + "ON E.DATA7 = D.DAT18\n"
                    + "AND E.DATA11 = D.DAT21\n"
                    + "AND UCASE(E.DATA6) = UCASE(D.DAT17)\n"
                    + "AND UCASE(E.DATA10) = UCASE(D.DAT20)\n"
                    + "AND UCASE(E.DATA5) = UCASE(D.DAT16)\n"
                    + "AND UCASE(E.DATA9) = UCASE(D.DAT19)\n"
                    + "AND E.DATA20 = D.DAT22");
        } else if (type == 2) {
            st.execute("DROP TABLE IF EXISTS COMBINEDDATA2;");
            st.execute("ANALYZE");
            st.execute("CREATE TABLE COMBINEDDATA2 AS \n"
                    + "SELECT DISTINCT E.DATA1, E.DATA2, E.DATA3, E.DATA4, E.DATA5, E.DATA6, \n"
                    + "E.DATA7, E.DATA8, E.DATA9, E.DATA10, E.DATA11, E.DATA12, E.DATA13, E.DATA14, E.DATA15, E.DATA16, E.DATA17, \n"
                    + "E.DATA18, E.DATA19, E.DATA21, E.DATA26, E.DATA27, E.DATA28, E.DATA29, \n"
                    + "E.DATA30, E.DATA31, E.DATA32, E.DATA34, E.DATA35, E.DATA36, E.DATA37, E.DATA38, \n"
                    + "C.CHAIN20, C.CHAIN33, C.CHAIN22, \n"
                    + "D.DAT2, D.DAT3, D.DAT4, D.DAT7, D.DAT11, D.DAT9, D.DAT5, \n"
                    + "E.DATA39, E.DATA40, E.DATA41 FROM rawData AS E \n"
                    + "LEFT JOIN CHAINDATA AS C \n"
                    + "ON E.DATA7 = c.CHAIN2\n"
                    + "AND E.DATA11 = c.CHAIN4\n"
                    + "AND E.DATA21 = c.CHAIN10\n"
                    + "AND E.DATA22 = c.CHAIN11\n"
                    + "LEFT JOIN DATDATA AS D\n"
                    + "ON E.DATA7 = D.DAT18\n"
                    + "AND E.DATA11 = D.DAT21\n"
                    + "AND UCASE(E.DATA6) = UCASE(D.DAT17)\n"
                    + "AND UCASE(E.DATA10) = UCASE(D.DAT20)\n"
                    + "AND UCASE(E.DATA5) = UCASE(D.DAT16)\n"
                    + "AND UCASE(E.DATA9) = UCASE(D.DAT19)\n"
                    + "AND E.DATA20 = D.DAT22");
        }
        System.out.println("here");
        if (type == 1) {
            String dir = System.getProperty("user.dir");
            st.executeUpdate("CALL CSVWRITE('" + dir + "\\OnlyMatching.csv', 'SELECT * FROM COMBINEDDATA', 'charset=UTF-8');");
        } else if (type == 2) {
            String dir = System.getProperty("user.dir");
            st.executeUpdate("CALL CSVWRITE('" + dir + "\\AllNonMatching.csv', 'SELECT * FROM COMBINEDDATA2', 'charset=UTF-8');");
        }
    } catch (Exception ex) {
        Logger.getLogger(RyderCombinerGUI.class.getName()).log(Level.SEVERE, null, ex);
    }
}
In the above snippet, the second branch, with the left joins, works the same on H2 and PostgreSQL, but the inner-join branch returns something different.
Ex)
This is the output CSV file using the H2 database.
And this is the output using the PostgreSQL database.
Thanks in advance.
Assuming that you run the same ANSI-compliant query, with the same underlying data, in both H2 and Postgres, you should get the same result. There is nothing whatsoever different about the behavior of INNER JOIN between the two databases.
But a quick search for ORDER BY in your code dump reveals that you are not doing any ordering in your queries. Postgres coincidentally appears to be sorting on the data1 column, while H2 does not appear to be sorting at all; I suspect the result sets are identical when viewed as unordered sets.
In general, if you expect a certain ordering in your result set, you need to use ORDER BY in the query that generates the data. So if you add ORDER BY data1 to both queries, I expect the results will appear the same for both H2 and Postgres.
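To see why "identical as unordered sets" can still produce different-looking CSV files, here is a small standalone sketch (plain Java, with made-up row values) comparing two result lists that differ only in order:

```java
import java.util.HashSet;
import java.util.List;

public class OrderDemo {
    public static void main(String[] args) {
        // Hypothetical first-column values as two databases might return them
        // for the same query without an ORDER BY clause:
        List<String> h2Rows = List.of("b", "a", "c"); // H2's arbitrary order
        List<String> pgRows = List.of("a", "b", "c"); // Postgres happened to sort

        // Compared as ordered lists (like two CSV files), they differ:
        System.out.println(h2Rows.equals(pgRows)); // false

        // Compared as unordered sets, they are identical:
        System.out.println(new HashSet<>(h2Rows).equals(new HashSet<>(pgRows))); // true
    }
}
```

Adding ORDER BY to both queries pins down the list order, which is what a line-by-line CSV comparison actually tests.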
I have one row from the database. When I view this data in Reporting Services it appears on a single line:
baris 1 baris 2
I want to change the Reporting Services view to show:
baris 1
baris 2
Can anyone help, please?
I would use the Split function combined with the constant Constants.vbCrLf for the line breaks. In your scenario the Split might be tricky - there's no obvious delimiter to split on, so this example splits on spaces and might get you started:
= "1. " + Split(Fields!my_column.Value, " ")(0) + " " + Split(Fields!my_column.Value, " ")(1)
  + Constants.vbCrLf
  + "2. " + Split(Fields!my_column.Value, " ")(2) + " " + Split(Fields!my_column.Value, " ")(3)
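The same indexing logic, sketched in plain Java to show what the expression produces (assuming the column holds the space-delimited value "baris 1 baris 2"; Constants.vbCrLf becomes "\n" here):

```java
public class SplitDemo {
    public static void main(String[] args) {
        // Hypothetical column value from the single database row:
        String value = "baris 1 baris 2";
        String[] parts = value.split(" "); // ["baris", "1", "baris", "2"]

        // Mirror of the SSRS expression: number each pair, join with a newline.
        String display = "1. " + parts[0] + " " + parts[1] + "\n"
                       + "2. " + parts[2] + " " + parts[3];
        System.out.println(display); // prints "1. baris 1" then "2. baris 2"
    }
}
```

If the real column value contains a more reliable delimiter than a space, split on that instead; hard-coded indexes break as soon as the text contains extra spaces.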