Sending JPA data from server to client? - database

I'm using the Eclipse Juno IDE.
I have a client-server application. On the server side I have an entity (Travels)
and another class that handles the JPA queries. I'm receiving the data from the database,
but when I try to send it as a Vector to the client I get an exception on the
client side that says "Cannot cast pack.db.Travels to java.util.Vector".
Here is my code:
Entity:
package pack.db;
import java.io.Serializable;
import java.sql.Date;
import java.sql.Time;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
@Entity
public class Travels implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(strategy=GenerationType.SEQUENCE)
@Column(name="id")
private int id;
@Column(name="taxi_number")
private String taxiNumber;
@Column(name="travel_date")
private Date travelDate;
@Column(name="travel_time")
private Time travelTime;
@Column(name="cost")
private Double travelCost;
public Travels() {
super();
}
public void setNumber(String number)
{
this.taxiNumber = number;
}
public void setDate(Date date)
{
this.travelDate = date;
}
public void setTime(Time time)
{
this.travelTime = time;
}
public void setCost(Double cost)
{
this.travelCost = cost;
}
}
Query class:
public Vector retrieveAllTravelsData(Date[] travelDate, Time[] travelTime) {
List<Object[]> allTravels = (List<Object[]>)em.createQuery("SELECT t FROM Travels t WHERE t.travelDate between ?1 and ?2 and " +
"t.travelTime between ?3 and ?4")
.setParameter(1, travelDate[0])
.setParameter(2, travelDate[1])
.setParameter(3, travelTime[0])
.setParameter(4, travelTime[1]).getResultList();
return (Vector) allTravels;
}
So what I want to do is send "allTravels" as a Vector to the client side, because
I need to populate a JTable on the client side. I tried to cast the data returned
by the query to Object[] (the JTable constructor needs Object[][] for the rows) and send that, but I still get the exception on the client side that says
"Cannot cast pack.db.Travels to java.util.Vector". I don't think I need to add
the Travels entity on the client side, so how can I send the data to the client?
To be more specific, I have this code with a JDBC implementation:
public Vector retrieveAllTravelsData(Date[] travelDate, Time[] travelTime) {
Vector rows_data = new Vector();
String sql = "SELECT * FROM taxis.travels " + " WHERE travel_date BETWEEN ? AND ? AND travel_time BETWEEN ? AND ?";
try {
statement = (PreparedStatement) connection.prepareStatement(sql);
statement.setDate(1, travelDate[0]);
statement.setDate(2, travelDate[1]);
statement.setTime(3, travelTime[0]);
statement.setTime(4, travelTime[1]);
rs = statement.executeQuery();
ResultSetMetaData meta = rs.getMetaData();
int cols_count = meta.getColumnCount();
while (rs.next()) {
Vector record = new Vector();
for (int i = 0; i < cols_count; i++) {
record.add(rs.getString(i+1));
}
rows_data.addElement(record);
}
} catch (SQLException e) {
while (e != null) {
e.printStackTrace();
e = e.getNextException();
}
}
return rows_data;
}
Here I can get the data from each column, save it as a record, and then put the record into the Vector. How can that be implemented with JPA? Is it possible?

Casting an object to another class doesn't magically change the type of the object. It only allows referencing it as a more concrete class. So casting a List to Vector only works if the list is indeed a Vector.
getResultList() returns a List. That's what the javadoc says. The concrete class returned depends on the JPA provider, but I'm pretty sure none of them returns a Vector, since Vector is a class that has been considered legacy since Java 1.2.
Moreover, this particular query doesn't return Object[], but instances of Travels (which should be named Travel, BTW).
So the method should be:
public List<Travel> retrieveAllTravelsData(Date[] travelDate, Time[] travelTime) {
List<Travel> allTravels = (List<Travel>) em.createQuery("SELECT t FROM Travel t WHERE t.travelDate between ?1 and ?2 and " +
"t.travelTime between ?3 and ?4")
.setParameter(1, travelDate[0])
.setParameter(2, travelDate[1])
.setParameter(3, travelTime[0])
.setParameter(4, travelTime[1]).getResultList();
return allTravels;
}
The server shouldn't care that the client-side needs a Vector to satisfy an old Swing class. If you really need a Vector at client-side, then create one from the returned list:
Vector<Travel> travelsAsVector = new Vector<>(travelsAsList);
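If the ultimate goal is a JTable, here is a minimal sketch of the client-side conversion (assuming the entity also exposes getters, which the snippet above omits, and that the Travel class is on the client's classpath):
// Hypothetical client-side helper: turn the received entities into the
// Vector-of-Vectors structure that DefaultTableModel understands.
Vector<Vector<Object>> rows = new Vector<>();
for (Travel t : travelsAsList) {
    Vector<Object> row = new Vector<>();
    row.add(t.getTaxiNumber());
    row.add(t.getTravelDate());
    row.add(t.getTravelTime());
    row.add(t.getTravelCost());
    rows.add(row);
}
Vector<String> columns = new Vector<>(Arrays.asList("Taxi number", "Date", "Time", "Cost"));
JTable table = new JTable(new DefaultTableModel(rows, columns));
That way the server keeps returning entities and only the client deals with Swing's Vector-based API.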

OK, I solved it like this:
public Vector retrieveAllTravelsData(Date[] travelDate, Time[] travelTime) {
javax.persistence.Query q = (javax.persistence.Query)em.createNativeQuery("SELECT *
from Travels WHERE travel_date between ?1 and ?2")
.setParameter(1, travelDate[0])
.setParameter(2, travelDate[1])
.setParameter(3, travelTime[0])
.setParameter(4, travelTime[1]);
List<Object[]> result = (List<Object[]>)q.getResultList();
Vector rows = new Vector();
for (int i = 0 ; i < result.size(); i++)
{
Vector rec = new Vector();
for (int j = 0; j < result.get(i).length; j++)
{
rec.add(result.get(i)[j].toString()); // take each column value and add it to the record
}
rows.addElement(rec);
}
return rows;
}

Related

How to use custom edge implementation with EdmondsKarp max flow algorithm

I'm trying to implement and simulate a network where I can try out some routing methods.
My problem is that one of my routing methods requires me to calculate max flow/min cut.
I have a custom implementation for the edges, where I added some new fields like Capacity.
Here is my implementation:
import org.jgrapht.graph.DefaultWeightedEdge;
import java.io.Serializable;
public class MyDefaultWeightedEdge extends DefaultWeightedEdge implements Serializable {
protected int freecapacity;
protected boolean isFeasable;
public MyDefaultWeightedEdge(){
this.isFeasable = true;
}
protected int getFreeCapacity(){return this.freecapacity;}
protected void setFreeCapacity(int i)
{
this.freecapacity = i;
}
protected boolean getFeasable(){return this.isFeasable;}
protected void setFeasable(boolean b){this.isFeasable = b;}
@Override
protected Object getSource() {
return super.getSource();
}
@Override
protected Object getTarget() {
return super.getTarget();
}
@Override
protected double getWeight(){
System.out.println("getWeight");
StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
StackTraceElement e = stacktrace[2];//maybe this number needs to be corrected
String methodName = e.getMethodName();
if(methodName.equals(""))
{
return this.freecapacity;
}
else {
return super.getWeight();
}
}
public String toString() {
return "(" + this.getSource() + " : " + this.getTarget() + ") " + "Weight " + this.getWeight() + " Capacity " + this.getFreeCapacity();
}
}
When I try to use EdmondsKarpMFImpl, my problem is that the algorithm uses the edge weight as the capacity.
Question:
How can I use my implementation of the edge?
Question:
How can I get all of the edges that are in the min cut / max flow?
Thanks!
There are a number of different solutions.
Standard approach. If you only have 1 type of weight (e.g. a capacity, or a cost), you could simply use a DefaultWeightedEdge and use the graph's setEdgeWeight and getEdgeWeight methods to define the edge's weight. You are free to interpret this weight in whatever way that fits your application.
public static void exampleNF(){
//Standard approach
Graph<Integer, DefaultWeightedEdge> graph = new DefaultUndirectedWeightedGraph<>(DefaultWeightedEdge.class);
Graphs.addAllVertices(graph, Arrays.asList(1,2,3,4));
Graphs.addEdge(graph, 1,2,10);
Graphs.addEdge(graph, 2,3,4);
Graphs.addEdge(graph, 2,4,3);
Graphs.addEdge(graph, 1,4,8);
Graphs.addEdge(graph, 4,3,15);
MaximumFlowAlgorithm<Integer, DefaultWeightedEdge> mf = new EdmondsKarpMFImpl<>(graph);
System.out.println(mf.getMaximumFlow(1,3));
}
Use an AsWeightedGraph. This is convenient if your graph doesn't have weights, or, if your edges have more than 1 weight (e.g. both a cost and a capacity) and you want to switch between them.
public static void exampleNF2(){
//Make an unweighted graph weighted using an AsWeightedGraph wrapper
Graph<Integer, DefaultEdge> graph = new DefaultUndirectedGraph<>(DefaultEdge.class);
Graphs.addAllVertices(graph, Arrays.asList(1,2,3,4));
DefaultEdge e1 = graph.addEdge(1,2);
DefaultEdge e2 = graph.addEdge(2,3);
DefaultEdge e3 = graph.addEdge(2,4);
DefaultEdge e4 = graph.addEdge(1,4);
DefaultEdge e5 = graph.addEdge(4,3);
Map<DefaultEdge, Double> capacities = Map.of(e1, 10.0, e2, 4.0, e3, 3.0, e4, 8.0, e5, 15.0);
MaximumFlowAlgorithm<Integer, DefaultEdge> mf = new EdmondsKarpMFImpl<>(new AsWeightedGraph<>(graph, capacities));
System.out.println(mf.getMaximumFlow(1,3));
}
Again using an AsWeightedGraph, but this time using a function as a 'pass-through' to get a specific weight stored on the arcs themselves
public static void exampleNF3(){
//Using the AsWeightedGraph as a function
Graph<Integer, MyEdge> graph = new DefaultUndirectedGraph<>(MyEdge.class);
Graphs.addAllVertices(graph, Arrays.asList(1,2,3,4));
graph.addEdge(1,2, new MyEdge(10));
graph.addEdge(2,3, new MyEdge(4));
graph.addEdge(2,4, new MyEdge(3));
graph.addEdge(1,4, new MyEdge(8));
graph.addEdge(4,3, new MyEdge(15));
MaximumFlowAlgorithm<Integer, MyEdge> mf = new EdmondsKarpMFImpl<>(new AsWeightedGraph<>(graph, e -> e.capacity, false, false));
System.out.println(mf.getMaximumFlow(1,3));
}
private static class MyEdge {
private final double capacity;
public MyEdge(double capacity){
this.capacity=capacity;
}
}
We could also implement our own custom graph and override the getEdgeWeight and setEdgeWeight methods. In this example, we use the MyEdge class from the previous example.
public static void exampleNF4(){
//Using a custom graph
MyGraph graph = new MyGraph(MyEdge.class);
Graphs.addAllVertices(graph, Arrays.asList(1,2,3,4));
graph.addEdge(1,2, new MyEdge(10));
graph.addEdge(2,3, new MyEdge(4));
graph.addEdge(2,4, new MyEdge(3));
graph.addEdge(1,4, new MyEdge(8));
graph.addEdge(4,3, new MyEdge(15));
MaximumFlowAlgorithm<Integer, MyEdge> mf = new EdmondsKarpMFImpl<>(graph);
System.out.println(mf.getMaximumFlow(1,3));
}
private static class MyGraph extends SimpleWeightedGraph<Integer, MyEdge>{
public MyGraph(Class<? extends MyEdge> edgeClass) {
super(edgeClass);
}
@Override
public double getEdgeWeight(MyEdge e){
return e.capacity;
}
}
There's probably more, but this covers quite a range of different approaches already. Personally I would not implement my own graph class unless I need it for something very specific.
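The second question (retrieving the edges in the min cut) isn't shown above. Here is a minimal sketch, assuming a recent JGraphT release in which EdmondsKarpMFImpl also implements MinimumSTCutAlgorithm, reusing the graph from the first example:
public static void exampleMinCut(){
    Graph<Integer, DefaultWeightedEdge> graph = new DefaultUndirectedWeightedGraph<>(DefaultWeightedEdge.class);
    Graphs.addAllVertices(graph, Arrays.asList(1,2,3,4));
    Graphs.addEdge(graph, 1,2,10);
    Graphs.addEdge(graph, 2,3,4);
    Graphs.addEdge(graph, 2,4,3);
    Graphs.addEdge(graph, 1,4,8);
    Graphs.addEdge(graph, 4,3,15);
    EdmondsKarpMFImpl<Integer, DefaultWeightedEdge> ek = new EdmondsKarpMFImpl<>(graph);
    // Per-edge flow values of the maximum flow.
    System.out.println(ek.getMaximumFlow(1,3).getFlowMap());
    // Min cut: its capacity equals the max-flow value (max-flow/min-cut theorem).
    System.out.println(ek.calculateMinCut(1,3));
    System.out.println(ek.getCutEdges());
}
If I remember the API correctly, getCutEdges() has to come after calculateMinCut(...), because the cut partition is derived from the last flow computation.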

Is there a way to schedule jobs to specific processor in Apache Flink?

I am a new user of Apache Flink and I am currently aiming at testing out a scheduling algorithm on a heterogeneous processing system. Hence, which processor I am deploying each job to becomes quite important. However, I could not find how I can specify the processor ID that I am deploying my jobs to, nor could I find a way to make the processors return the availability of them.
I sincerely appreciate your help if you could kindly give me some hints of how I can do these. Hope that you enjoy your day:)
I ran into a similar problem when scheduling and monitoring Flink subtasks on specific CPU cores of the machines. I used LinuxJNAAffinity (https://github.com/OpenHFT/Java-Thread-Affinity) to solve it. Maybe you can base your solution on mine. Here is one of my UDFs.
import java.util.BitSet;
import java.util.List;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.configuration.Configuration;
import org.sense.flink.pojo.Point;
import org.sense.flink.pojo.ValenciaItem;
import org.sense.flink.util.CRSCoordinateTransformer;
import org.sense.flink.util.CpuGauge;
import org.sense.flink.util.SimpleGeographicalPolygons;
import net.openhft.affinity.impl.LinuxJNAAffinity;
public class ValenciaItemDistrictMap extends RichMapFunction<ValenciaItem, ValenciaItem> {
private static final long serialVersionUID = 624354384779615610L;
private SimpleGeographicalPolygons sgp;
private transient CpuGauge cpuGauge;
private BitSet affinity;
private boolean pinningPolicy;
public ValenciaItemDistrictMap() {
this(false);
}
public ValenciaItemDistrictMap(boolean pinningPolicy) {
this.pinningPolicy = pinningPolicy;
}
@Override
public void open(Configuration parameters) throws Exception {
super.open(parameters);
this.sgp = new SimpleGeographicalPolygons();
this.cpuGauge = new CpuGauge();
getRuntimeContext().getMetricGroup().gauge("cpu", cpuGauge);
if (this.pinningPolicy) {
// listing the cpu cores available
int nbits = Runtime.getRuntime().availableProcessors();
// pinning the operator's thread to a specific cpu core
this.affinity = new BitSet(nbits);
affinity.set(((int) Thread.currentThread().getId() % nbits));
LinuxJNAAffinity.INSTANCE.setAffinity(affinity);
}
}
@Override
public ValenciaItem map(ValenciaItem value) throws Exception {
// updates the CPU core current in use
this.cpuGauge.updateValue(LinuxJNAAffinity.INSTANCE.getCpu());
System.err.println(ValenciaItemDistrictMap.class.getSimpleName() + " thread[" + Thread.currentThread().getId()
+ "] core[" + this.cpuGauge.getValue() + "]");
List<Point> coordinates = value.getCoordinates();
boolean flag = true;
int i = 0;
while (flag) {
Tuple3<Long, Long, String> adminLevel = sgp.getAdminLevel(coordinates.get(i));
if (adminLevel.f0 != null && adminLevel.f1 != null) {
value.setId(adminLevel.f0);
value.setAdminLevel(adminLevel.f1);
value.setDistrict(adminLevel.f2);
flag = false;
} else {
i++;
}
}
if (flag) {
// if we did not find a district with the given coordinate we assume the
// district 16
value.clearCoordinates();
value.addCoordinates(
new Point(724328.279007, 4374887.874634, CRSCoordinateTransformer.DEFAULT_CRS_EPSG_25830));
value.setId(16L);
value.setAdminLevel(9L);
value.setDistrict("Benicalap");
}
return value;
}
}

How to filter value which is greater than a certain point in flink?

I have two streams. The first one is a time-based stream, and I used a count window to receive the first 10 data points and calculate a stat value. I manually used the variable cnt to keep only the first window and filtered out the remaining values, as shown in the code below.
Then I want to use this value to filter the main stream so that it only contains values greater than the stat value I computed in the windowed stream.
However, I have no idea how to merge or combine these two streams to achieve my goal.
My idea is to turn the first stat value into a broadcast variable and give it to the main stream, so that I can filter the incoming values based on the stat value in the broadcast variable.
Below is my code.
import com.sun.org.apache.xpath.internal.operations.Bool;
import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.GlobalWindow;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.functions.windowing.*;
import org.apache.flink.util.Collector;
import scala.Int;
import java.text.SimpleDateFormat;
import java.util.*;
import java.util.concurrent.TimeUnit;
public class ReadFromKafka {
static int cnt = 0;
public static void main(String[] args) throws Exception{
// create execution environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "flink");
DataStream<String> stream = env
.addSource(new FlinkKafkaConsumer09<>("flinkStreaming11", new SimpleStringSchema(), properties));
env.enableCheckpointing(1000);
//Time based window stream
DataStream<String> process = stream.countWindowAll(10).process(new ProcessAllWindowFunction<String, Tuple2<Double, Integer>, GlobalWindow>() {
@Override
public void process(Context context, Iterable<String> iterable, Collector<Tuple2<Double, Integer>> collector) throws Exception {
Double sum = 0.0;
int n = 0;
List<Double> listDouble = new ArrayList<>();
for (String in : iterable) {
n++;
double d = Double.parseDouble(in);
sum += d;
listDouble.add(d);
}
cnt++;
Double[] sd = listDouble.toArray(new Double[listDouble.size()]);
double mean = sum / n;
double sdev = 0;
for (int i = 0; i < sd.length; ++i) {
sdev += ((sd[i] - mean) * (sd[i] - mean)) / (sd.length - 1);
}
double standardDeviation = Math.sqrt(sdev);
collector.collect(new Tuple2<Double, Integer>(mean + 3 * standardDeviation, cnt));
}
}).filter(new FilterFunction<Tuple2<Double, Integer>>() {
@Override
public boolean filter(Tuple2<Double, Integer> doubleIntegerTuple2) throws Exception {
Integer i1 = doubleIntegerTuple2.f1;
if (i1 > 1)
return false;
else
return true;
}
}).map(new RichMapFunction<Tuple2<Double, Integer>, String>() {
@Override
public String map(Tuple2<Double, Integer> doubleIntegerTuple2) throws Exception {
return String.valueOf(doubleIntegerTuple2.f0);
}
});
//I don't think this is a proper solution.
process.union(stream).filter(new FilterFunction<String>() {
@Override
public boolean filter(String s) throws Exception {
return false;
}
});
env.execute("InfluxDB Sink Example");
env.execute();
}
}
First, I think you only have one stream, right? There's only one Kafka-based source of doubles (encoded as Strings).
Second, if the first 10 values really do permanently define the limit for filtering, then you can just run the stream into a RichFlatMap function, where you capture the first 10 values to calculate your max value, and then filter all subsequent values (only output values >= this limit).
Note that typically you'd want to save state (array of 10 initial values, plus the limit) so that your workflow can be restarted from a checkpoint/savepoint.
If instead you are constantly re-calculating your limit from the most recent 10 values, then the code is just a bit more complex, in that you have a queue of values, and you need to do the filtering on the value being flushed from the queue when you add a new value.
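For the fixed-limit case, here is a minimal sketch of that RichFlatMap approach (hypothetical class name; for brevity the buffered values live in a plain field rather than Flink managed state, so they would not survive a restart):
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.util.Collector;
import java.util.ArrayList;
import java.util.List;

public class ThresholdFilter extends RichFlatMapFunction<String, String> {
    private final List<Double> firstValues = new ArrayList<>();
    private Double limit = null;

    @Override
    public void flatMap(String value, Collector<String> out) {
        double d = Double.parseDouble(value);
        if (limit == null) {
            // still collecting the first 10 values that define the limit
            firstValues.add(d);
            if (firstValues.size() == 10) {
                double mean = firstValues.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
                double var = firstValues.stream().mapToDouble(v -> (v - mean) * (v - mean)).sum() / (firstValues.size() - 1);
                limit = mean + 3 * Math.sqrt(var);
            }
        } else if (d >= limit) {
            // forward only values at or above the limit
            out.collect(value);
        }
    }
}
It would be wired in with something like stream.flatMap(new ThresholdFilter()).setParallelism(1), since the limit only makes sense if a single instance sees all of the first 10 values.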

mapreduce fails with message "The request to API call datastore_v3.Put() was too large."

I am running a mapreduce job over 50 million User records.
For each user I read two other Datastore entities and then stream stats for each player to BigQuery.
My first dry run (with streaming to BigQuery disabled) failed with the following stack trace.
/_ah/pipeline/handleTask
com.google.appengine.tools.cloudstorage.NonRetriableException: com.google.apphosting.api.ApiProxy$RequestTooLargeException: The request to API call datastore_v3.Put() was too large.
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:121)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:166)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:157)
at com.google.appengine.tools.pipeline.impl.backend.AppEngineBackEnd.tryFiveTimes(AppEngineBackEnd.java:196)
at com.google.appengine.tools.pipeline.impl.backend.AppEngineBackEnd.saveWithJobStateCheck(AppEngineBackEnd.java:236)
I have googled this error and the only thing I found relates to the Mapper being too big to be serialized, but our Mapper has no data at all.
/**
* Adds stats for a player via streaming api.
*/
public class PlayerStatsMapper extends Mapper<Entity, Void, Void> {
private static Logger log = Logger.getLogger(PlayerStatsMapper.class.getName());
private static final long serialVersionUID = 1L;
private String dataset;
private String table;
private transient GbqUtils gbq;
public PlayerStatsMapper(String dataset, String table) {
gbq = Davinci.getComponent(GbqUtils.class);
this.dataset = dataset;
this.table = table;
}
private void readObject(java.io.ObjectInputStream in) throws IOException, ClassNotFoundException {
in.defaultReadObject();
log.info("IOC reinitating due to deserialization.");
gbq = Davinci.getComponent(GbqUtils.class);
}
@Override
public void beginShard() {
}
@Override
public void endShard() {
}
@Override
public void map(Entity value) {
if (!value.getKind().equals("User")) {
log.severe("Expected a User but got a " + value.getKind());
return;
}
User user = new User(1, value);
List<Map<String, Object>> rows = new LinkedList<Map<String, Object>>();
List<PlayerStats> playerStats = readPlayerStats(user.getUserId());
addRankings(user.getUserId(), playerStats);
for (PlayerStats ps : playerStats) {
rows.add(ps.asMap());
}
// if (rows.size() > 0)
// gbq.insert(dataset, table, rows);
}
.... private methods only
}
The mapreduce job is started with this code:
MapReduceSettings settings = new MapReduceSettings().setWorkerQueueName("mrworker");
settings.setBucketName(gae.getAppName() + "-playerstats");
// @formatter:off <I, K, V, O, R>
MapReduceSpecification<Entity, Void, Void, Void, Void> spec =
MapReduceSpecification.of("Enque player stats",
new DatastoreInput("User", shardCountMappers),
new PlayerStatsMapper(dataset, "playerstats"),
Marshallers.getVoidMarshaller(),
Marshallers.getVoidMarshaller(),
NoReducer.<Void, Void, Void> create(),
NoOutput.<Void, Void> create(1));
// @formatter:on
String jobId = MapReduceJob.start(spec, settings);
Well, I solved this by going back to appengine-mapreduce-0.2.jar, which was the one we had used before. The one used above was appengine-mapreduce-0.5.jar, which actually turned out not to work for us.
After going back to 0.2, the console at /_ah/pipeline/list started to work again as well!
Has anyone else encountered a similar problem with 0.5?

Comparing strings in a JDO query fails when value contains a "Comma"

I am attempting to check for an existing string using a JDO query, in my attempt to prevent the insertion of a duplicate string.
My query to check for an existing string works fine, unless the two strings I am comparing have a comma in the value. If the value contains commas, the comparison using "==" fails.
For example, if I query to see if "Architecture" exists, I get the right result (Horrray!).
If I attempt to see if "Architecture, Engineering, and Drafting" exists, and it does, the query comes back and says an identical value does not exist (Boo!).
The code I'm using is as follows:
Called from the RPC
public void addCommas()
{
final Industry e = new Industry();
e.setIndustryName("Architecture, Engineering, and Drafting");
persist(e);
}
public void addNoCommas()
{
final Industry e = new Industry();
e.setIndustryName("Architecture");
persist(e);
}
Persist Operation
private void persist(Industry industry)
{
if (industryNameExists(industry.getIndustryName()))
{
return;
}
final PersistenceManager pm = PMF.get().getPersistenceManager();
pm.currentTransaction().begin();
try
{
pm.makePersistent(industry);
pm.flush();
pm.currentTransaction().commit();
} catch (final Exception ex)
{
throw new RuntimeException(ex);
} finally
{
if (pm.currentTransaction().isActive())
{
pm.currentTransaction().rollback();
}
pm.close();
}
}
Query
public static boolean industryNameExists(final String industryName)
{
final PersistenceManager pm = PMF.get().getPersistenceManager();
Query q = null;
q = pm.newQuery(Industry.class);
q.setFilter("industryName == industryNameParam");
q.declareParameters(String.class.getName() + " industryNameParam");
final List<Industry> industry = (List<Industry>) q.execute(industryName.getBytes());
boolean exists = !industry.isEmpty();
if (q != null)
{
q.closeAll();
}
pm.close();
return exists;
}
JDO Entity
@PersistenceCapable(detachable = "true")
public class Industry implements StoreCallback
{
@NotNull(message = "Industry Name is required.")
private String industryName;
@Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
@PrimaryKey
private Key key;
public Industry()
{
super();
}
public Key getIndustryKey()
{
return key;
}
public String getIndustryName()
{
return industryName;
}
@Override
public void jdoPreStore()
{
if (industryName != null)
{
industryName = industryName.trim();
}
}
public void setIndustryName(final String industryName)
{
this.industryName = industryName;
}
}
Any thoughts on a resolution or pinpointing an oversight would be very much appreciated.
Cheerio.
So you're calling industryNameExists("Architecture, Engineering, and Drafting") and trying to match an entity whose industryName exactly equals "Architecture, Engineering, and Drafting"?
Assuming there is no typo or whitespace difference, the only suspect thing is the getBytes() call. Try the following:
Query q = pm.newQuery(Industry.class, "this.industryName == :industryNameParam");
List<Industry> industry = (List<Industry>) q.execute(industryName);
You can also try variant filters like "this.industryName.equalsIgnoreCase(:industryNameParam)" and "this.industryName.startsWith(:industryNameParam)" to troubleshoot.
If it still does not work, try logging the SQL generated for review and compare with a hand-written query that works.
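As a sketch of that troubleshooting step (a hypothetical helper; the parameter names and the comma-splitting are only for illustration), running the same lookup with progressively looser filters can show where the match breaks down:
public static void troubleshootIndustryLookup(String industryName) {
    PersistenceManager pm = PMF.get().getPersistenceManager();
    try {
        // Exact match, passing the String itself rather than getBytes().
        Query exact = pm.newQuery(Industry.class, "this.industryName == :name");
        System.out.println("exact matches: " + ((List<Industry>) exact.execute(industryName)).size());

        // Prefix match on the part before the first comma.
        String prefix = industryName.split(",")[0];
        Query starts = pm.newQuery(Industry.class, "this.industryName.startsWith(:prefix)");
        System.out.println("prefix matches: " + ((List<Industry>) starts.execute(prefix)).size());
    } finally {
        pm.close();
    }
}
If the prefix query matches but the exact query does not, the stored value and the query value differ somewhere after the first comma (for example trailing whitespace or a byte-level difference introduced by getBytes()).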
