NullPointerException thrown when inserting an entity using the auto-generated endpoint class insert method - google-app-engine

I am confused about using the auto-generated endpoint class. I want to use the generated endpoint to insert a new object into the datastore, but an exception is thrown:
fooEndpoint.insertFoo(foo); // throws null pointer exception
My entity class is similar to the example given in the docs: https://developers.google.com/appengine/docs/java/datastore/jpa/overview.
Here is my entity:
@Entity
public class Foo {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Key ID;
Here is the stack trace:
java.lang.NullPointerException
at org.datanucleus.api.jpa.JPAEntityManager.find(JPAEntityManager.java:318)
at org.datanucleus.api.jpa.JPAEntityManager.find(JPAEntityManager.java:256)
at com.FooEndpoint.containsFoo(FooEndpoint.java:150)
at com.FooEndpoint.insertFoo(FooEndpoint.java:96)
On the other hand, I can insert a new object when I use the EntityManager's persist method directly, because that does not check whether the object already exists in the datastore.
I expected the endpoint's insert method to save the object and assign an auto-generated key to the ID field. Or do I need to initialize the ID field myself?
Here is the auto-generated endpoint class's insertFoo method:
/**
 * This inserts a new entity into App Engine datastore. If the entity already
 * exists in the datastore, an exception is thrown.
 * It uses HTTP POST method.
 *
 * @param foo the entity to be inserted.
 * @return The inserted entity.
 */
public Foo insertFoo(Foo foo) {
    EntityManager mgr = getEntityManager();
    try {
        if (containsFoo(foo)) {
            throw new EntityExistsException("Object already exists");
        }
        mgr.persist(foo);
    } finally {
        mgr.close();
    }
    return foo;
}
Here is the containsFoo method
private boolean containsFoo(Foo foo) {
    EntityManager mgr = getEntityManager();
    boolean contains = true;
    try {
        Foo item = mgr.find(Foo.class, foo.getID()); // exception occurs here
        if (item == null) {
            contains = false;
        }
    } finally {
        mgr.close();
    }
    return contains;
}
foo.getID() is null because it is a new object. I am expecting App Engine to create a key for it. Or do I need to explicitly create a key for it myself?
The other fields in the Foo class are simple types such as String and boolean.
Thanks for your time.

I had exactly the same problem.
I will present the way I worked around it.
Original auto-generated Endpoints class relevant code:
private boolean containsFoo(Foo foo) {
    EntityManager mgr = getEntityManager();
    boolean contains = true;
    try {
        Foo item = mgr.find(Foo.class, foo.getID());
        if (item == null) {
            contains = false;
        }
    } finally {
        mgr.close();
    }
    return contains;
}
I changed the relevant code to include a null check for the ID of the entity object that is passed as an argument:
private boolean containsFoo(Foo foo) {
    EntityManager mgr = getEntityManager();
    boolean contains = true;
    try {
        // If no ID was set, the entity doesn't exist yet.
        if (foo.getID() == null) {
            return false;
        }
        Foo item = mgr.find(Foo.class, foo.getID());
        if (item == null) {
            contains = false;
        }
    } finally {
        mgr.close();
    }
    return contains;
}
This way it works as intended, although I'm confident that more experienced answers and explanations will appear.
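The underlying cause appears to be that foo.getID() is still null for a brand-new object, and calling find() with a null key is what throws the NullPointerException inside DataNucleus. The auto-generated Key is only assigned once the persist is flushed/committed. A minimal sketch of that behaviour, reusing the endpoint's getEntityManager() and assuming an explicit transaction (the generated insertFoo relies on a non-transactional write instead):
EntityManager mgr = getEntityManager();
try {
    mgr.getTransaction().begin();
    Foo foo = new Foo();            // foo.getID() is null at this point
    mgr.persist(foo);
    mgr.getTransaction().commit();  // the datastore assigns the generated Key here
    // foo.getID() is now populated
} finally {
    mgr.close();
}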

I was having the exact same problem after using the Eclipse plugin to auto-generate the Cloud Endpoints (by selecting "Google > Generate Cloud Endpoint Class").
Following your advice, I added:
if(foo.getID() == null) // replace foo with the name of your own object
return false;
The problem was solved.
How is it that Google hasn't updated the auto-generated code yet, since this must be a highly recurring issue?
Thanks for the solution.

Related

Populating a table from a file only last column is populated JavaFX [duplicate]

This has baffled me for a while now and I cannot seem to get a grasp of it. I'm using a cell value factory to populate a simple one-column table, and it does not populate in the table.
The rows do get populated and I can click them, but I do not see any values in them - in this case String values. [I just edited this to make it clearer]
I have a different project in which this works with the same kind of data model. What am I doing wrong?
Here's the code. The commented code at the end seems to work, though. I've checked for the usual mistakes - creating a new column instance or a new TableView instance - and found nothing. Please help!
//Simple Data Model
Stock.java
public class Stock {
    private SimpleStringProperty stockTicker;
    public Stock(String stockTicker) {
        this.stockTicker = new SimpleStringProperty(stockTicker);
    }
    public String getstockTicker() {
        return stockTicker.get();
    }
    public void setstockTicker(String stockticker) {
        stockTicker.set(stockticker);
    }
}
//Controller class
MainGuiController.java
private ObservableList<Stock> data;
@FXML
private TableView<Stock> stockTableView;// = new TableView<>(data);
@FXML
private TableColumn<Stock, String> tickerCol;
private void setTickersToCol() {
    try {
        Statement stmt = conn.createStatement(); // conn is defined and works
        ResultSet rsltset = stmt.executeQuery("SELECT ticker FROM tickerlist order by ticker");
        data = FXCollections.observableArrayList();
        Stock stockInstance;
        while (rsltset.next()) {
            stockInstance = new Stock(rsltset.getString(1).toUpperCase());
            data.add(stockInstance);
        }
    } catch (SQLException ex) {
        Logger.getLogger(WriteToFile.class.getName()).log(Level.SEVERE, null, ex);
        System.out.println("Connection Failed! Check output console");
    }
    tickerCol.setCellValueFactory(new PropertyValueFactory<Stock, String>("stockTicker"));
    stockTableView.setItems(data);
}
/*THIS, ON THE OTHER HAND, WORKS*/
/*Callback<CellDataFeatures<Stock, String>, ObservableValue<String>> cellDataFeat =
        new Callback<CellDataFeatures<Stock, String>, ObservableValue<String>>() {
            @Override
            public ObservableValue<String> call(CellDataFeatures<Stock, String> p) {
                return new SimpleStringProperty(p.getValue().getstockTicker());
            }
        };*/
Suggested solution (use a Lambda, not a PropertyValueFactory)
Instead of:
aColumn.setCellValueFactory(new PropertyValueFactory<Appointment,LocalDate>("date"));
Write:
aColumn.setCellValueFactory(cellData -> cellData.getValue().dateProperty());
For more information, see this answer:
Java: setCellValuefactory; Lambda vs. PropertyValueFactory; advantages/disadvantages
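Applied to the Stock model from this question (this assumes the stockTickerProperty() accessor shown in the PropertyValueFactory section below), the lambda form would be:
tickerCol.setCellValueFactory(cellData -> cellData.getValue().stockTickerProperty());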
Solution using PropertyValueFactory
The lambda solution outlined above is preferred, but if you wish to use PropertyValueFactory, this alternate solution provides information on that.
How to Fix It
The case of your getter and setter methods is wrong:
getstockTicker should be getStockTicker
setstockTicker should be setStockTicker
Some Background Information
Your PropertyValueFactory remains the same with:
new PropertyValueFactory<Stock,String>("stockTicker")
The naming convention will seem more obvious when you also add a property accessor to your Stock class:
public class Stock {
    private SimpleStringProperty stockTicker;
    public Stock(String stockTicker) {
        this.stockTicker = new SimpleStringProperty(stockTicker);
    }
    public String getStockTicker() {
        return stockTicker.get();
    }
    public void setStockTicker(String stockticker) {
        stockTicker.set(stockticker);
    }
    public StringProperty stockTickerProperty() {
        return stockTicker;
    }
}
The PropertyValueFactory uses reflection to find the relevant accessors (these should be public). First it will try to use the stockTickerProperty accessor and, if that is not present, fall back to the getters and setters. Providing a property accessor is recommended, as the table can then observe the property in the underlying model and update its cells automatically when the model changes.
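For example, once the property accessor is in place, a change to the underlying model shows up in the table without a manual refresh (a small illustrative snippet using the data list from the question):
// The cell observes stockTickerProperty(), so this edit appears in the table immediately.
data.get(0).setStockTicker("GOOG");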
Put getter and setter methods in your data class for all the elements.

How can I make Dapper.NET throw when result set has unmapped columns?

Using the example code below as context... When I run this query, the 'Id' field comes back as the default value (which is 0 for an int). I would like to tell Dapper to run in a manner where it throws an exception if there is a column in the result set that does not get mapped to a property on my result object. (I understand the issue is just that I need to remove the extra 'd' in the SQL query, but I'm interested in having this expose itself more explicitly.)
I've been unable to find anything on this topic. Please let me know if this is even possible with Dapper.
Thanks in advance (besides this issue, and for anyone who hasn't taken the plunge, Dapper really is the greatest thing since sliced bread!).
class CustomerRecord
{
    public int Id { get; set; }
    public string Name { get; set; }
}
CustomerRecord[] GetCustomerRecords()
{
    CustomerRecord[] ret;
    var sql = @"SELECT
                    CustomerRecordId AS Idd,
                    CustomerName as Name
                FROM CustomerRecord";
    using (var connection = new SqlConnection(this.connectionString))
    {
        ret = connection.Query<CustomerRecord>(sql).ToArray();
    }
    return ret;
}
You could create your own type map where you use Dapper's DefaultTypeMap and throw an exception when it cannot find the member:
public class ThrowWhenNullTypeMap<T> : SqlMapper.ITypeMap
{
    private readonly SqlMapper.ITypeMap _defaultTypeMap = new DefaultTypeMap(typeof(T));
    public ConstructorInfo FindConstructor(string[] names, Type[] types)
    {
        return _defaultTypeMap.FindConstructor(names, types);
    }
    public ConstructorInfo FindExplicitConstructor()
    {
        return _defaultTypeMap.FindExplicitConstructor();
    }
    public SqlMapper.IMemberMap GetConstructorParameter(ConstructorInfo constructor, string columnName)
    {
        return _defaultTypeMap.GetConstructorParameter(constructor, columnName);
    }
    public SqlMapper.IMemberMap GetMember(string columnName)
    {
        var member = _defaultTypeMap.GetMember(columnName);
        if (member == null)
        {
            throw new Exception();
        }
        return member;
    }
}
The downside of this is that you have to configure the type map for every entity:
SqlMapper.SetTypeMap(typeof(CustomerRecord), typeof(ThrowWhenNullTypeMap<CustomerRecord>));
This could be configured using reflection, however.
I came here after I solved this same problem for the IEnumerable<dynamic> methods in Dapper. Then I found the proposal to solve the issue for Query<T>; but that doesn't seem to be going anywhere.
My answer builds on the answer proposed by @HenkMollema, and uses his class in the solution, so credit to him for that...
To solve the IEnumerable<dynamic> scenario, I had created a "SafeDynamic" class (follow the link above to see that). I refactored the static "Create" method into an extension method:
public static class EnumerableDynamicExtensions
{
    public static IEnumerable<dynamic> Safe(this IEnumerable<dynamic> rows)
    {
        return rows.Select(x => new SafeDynamic(x));
    }
}
and then I created a DapperExtensions class to provide 'Safe' versions of Query and Read (Read is used after QueryMultiple), to give me...
internal static class DapperExtensions
{
    public static IEnumerable<dynamic> SafeQuery(this IDbConnection cnn, string sql, object param = null, IDbTransaction transaction = null, bool buffered = true, int? commandTimeout = default(int?), CommandType? commandType = default(CommandType?))
    {
        return cnn.Query(sql, param, transaction, buffered, commandTimeout, commandType).Safe();
    }
    public static IEnumerable<dynamic> SafeRead(this SqlMapper.GridReader gridReader, bool buffered = true)
    {
        return gridReader.Read(buffered).Safe();
    }
}
So to solve this issue I added a "SafeQuery<T>" method to DapperExtensions, which takes care of setting up that type mapping for you:
private static readonly IDictionary<Type, object> TypesThatHaveMapper = new Dictionary<Type, object>();
public static IEnumerable<T> SafeQuery<T>(this IDbConnection cnn, string sql, object param = null, IDbTransaction transaction = null, bool buffered = true, int? commandTimeout = default(int?), CommandType? commandType = default(CommandType?))
{
    if (TypesThatHaveMapper.ContainsKey(typeof(T)) == false)
    {
        SqlMapper.SetTypeMap(typeof(T), new ThrowWhenNullTypeMap<T>());
        TypesThatHaveMapper.Add(typeof(T), null);
    }
    return cnn.Query<T>(sql, param, transaction, buffered, commandTimeout, commandType);
}
So if the original poster changes the call from Query to SafeQuery, it should do what they requested.
Edit 25/1/17
Improvements to avoid threading issues on the static dictionary:
private static readonly ConcurrentDictionary<Type, object> TypesThatHaveMapper = new ConcurrentDictionary<Type, object>();
public static IEnumerable<T> SafeQuery<T>(this IDbConnection cnn, string sql, object param = null, IDbTransaction transaction = null, bool buffered = true, int? commandTimeout = default(int?), CommandType? commandType = default(CommandType?))
{
    TypesThatHaveMapper.AddOrUpdate(typeof(T), AddValue, UpdateValue);
    return cnn.Query<T>(sql, param, transaction, buffered, commandTimeout, commandType);
}
private static object AddValue(Type type)
{
    SqlMapper.SetTypeMap(type, XXX); // Apologies... XXX is left to the reader, as my implementation has moved on significantly.
    return null;
}
private static object UpdateValue(Type type, object existingValue)
{
    return null;
}
I'd like to expand on @Richardissimo's answer by providing a Visual Studio project that includes his "SafeQuery" extension to Dapper, wrapped up nice and neat and tested.
https://github.com/LarrySmith-1437/SafeDapper
I use this in all my projects now to help keep the DAL clean of mis-mapped data, and felt the need to share. I would have published a NuGet package, but the dependency on Dapper itself makes it much easier to post the project where consumers can update the reference to the Dapper version they want. Consume in good health, all.
Based on this thread and some other resources on SO, I've created an extension method without any custom mapper. What I needed was for the query to throw when some property of my DTO was not set, for example because the SQL query was missing a column in its SELECT statement; otherwise the DTO property silently keeps its default value, which is kind of dangerous.
The code could be simplified a little by not checking up front that all properties are present in the result, and instead throwing in the final Select call, where we could iterate over the properties of our type and check whether the query result contains each one.
public static class Extensions
{
    public static async Task<IEnumerable<T>> SafeQueryAsync<T>(
        this IDbConnection cnn,
        string sql,
        object param = null,
        IDbTransaction transaction = null,
        int? commandTimeout = default(int?),
        CommandType? commandType = default(CommandType?))
        where T : new()
    {
        Dictionary<string, PropertyInfo> propertySetters = typeof(T)
            .GetProperties().Where(p => p.CanRead && p.CanWrite)
            .ToDictionary(p => p.Name.ToLowerInvariant(), p => p);
        HashSet<string> typeProperties = propertySetters
            .Select(p => p.Key)
            .ToHashSet();
        var rows = (await cnn.QueryAsync(sql, param, transaction, commandTimeout, commandType)).ToArray();
        if (!rows.Any())
        {
            return Enumerable.Empty<T>();
        }
        var firstRow = rows.First();
        HashSet<string> rowColumns = ((IDictionary<string, object>)firstRow)
            .Select(kvp => kvp.Key.ToLowerInvariant()).ToHashSet();
        var notMappedColumns = typeProperties.Except(rowColumns).ToArray();
        if (notMappedColumns.Any())
        {
            throw new InvalidOperationException(
                $"Not all type properties had corresponding columns in SQL query. Query result lacks [{string.Join(", ", notMappedColumns)}]");
        }
        return rows.Select(row =>
        {
            IDictionary<string, object> rowDict = (IDictionary<string, object>)row;
            T instance = new T();
            rowDict.Where(o => propertySetters.ContainsKey(o.Key.ToLowerInvariant()))
                .ToList().ForEach(o => propertySetters[o.Key.ToLowerInvariant()].SetValue(instance, o.Value));
            return instance;
        }).AsEnumerable();
    }
}

Objectify Delete doesn't seem to be working

I'm trying to delete an entity from my datastore using Objectify, but it doesn't seem to be deleted even after shutting down the instance and restarting it. This is what the entity looks like in the datastore (both on the production server and the dev server):
This is the code I'm using to try and delete it:
@ApiMethod(name = "deleteDataVersion")
public Result deleteDataVersion(@Named("id") String id) {
    // Where id is the id of the entity in the datastore.
    if (id != null && !id.equals("")) {
        ofy().delete().type(DataVersion.class).id(id).now();
        return new Result(Result.STATUS_SUCCESS);
    } else
        return new Result(Result.STATUS_FAILED);
}
I've also tried this code:
@ApiMethod(name = "deleteDataVersion")
public Result deleteDataVersion(@Named("id") String id) {
    if (id != null && !id.equals("")) {
        // DataVersion doesn't have a parent.
        Key<DataVersion> key = Key.create(null, DataVersion.class, id);
        ofy().delete().key(key).now();
        return new Result(Result.STATUS_SUCCESS);
    } else
        return new Result(Result.STATUS_FAILED);
}
But the entity never gets deleted. This is the code for my entity:
@Entity
public class DataVersion {
    @Id
    private Long id;
    String folderName;
    @Index
    String effective;
    public DataVersion() {
    }
    public DataVersion(String folderName, String effective) {
        this.folderName = folderName;
        this.effective = effective;
    }
    // Getters & setters..
}
I just can't seem to find the problem :( Any help would be greatly appreciated! I'm sure it's something minor I'm overlooking (fairly new to Objectify/AppEngine).
The ID you take as a parameter in your endpoint is a String, but you are trying to delete a DataVersion object whose ID is a Long.
ofy().delete().type(DataVersion.class).id(Long.valueOf(id)).now();
would work better!
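Putting that into the method from the question, it would look roughly like this (a sketch reusing the Result and DataVersion classes from the question):
@ApiMethod(name = "deleteDataVersion")
public Result deleteDataVersion(@Named("id") String id) {
    if (id != null && !id.isEmpty()) {
        // The entity's @Id is a Long, so convert the String parameter before deleting.
        ofy().delete().type(DataVersion.class).id(Long.valueOf(id)).now();
        return new Result(Result.STATUS_SUCCESS);
    }
    return new Result(Result.STATUS_FAILED);
}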
First get the key.
Key<DataVersion> key = Key.create(null, DataVersion.class, Long.valueOf(id)); // the @Id is a Long, so convert the String id
Then fetch the entity from the database using the key.
DataVersion dataVersion = ofy().load().key(key).now();
Then delete the entity using objectify.
ofy().delete().entity(dataVersion).now();

How to know the order of update with Domain context SubmitChanges?

Suppose I have 3 entities generated from EF, say tab1, tab2 and tab3. In my Silverlight app I call SubmitChanges to save data to the DB, and all changes are processed by WCF RIA Services and EF automatically.
The question is: how can I know the order of the update operations in the database?
I need to know this because I have triggers on those tables and need to know the order in which they are updated.
One thing you can do is override PersistChangeSet() in your DomainService and manually control the order of saves. Just do nothing in your regular update/insert methods. Here's some pseudocode for a document-saving example to explain my answer:
[Insert]
public void InsertDocument(MyDocument objDocument) { }
[Update]
public void UpdateDocument(MyDocument objDocument) { }
protected override bool PersistChangeSet()
{
    try {
        // have to save document first to get its id....
        MyDocument objDocumentBeingSaved = null;
        foreach (ChangeSetEntry CSE in ChangeSet.ChangeSetEntries.Where(i => i.Entity is MyDocument)) {
            var changedEntity = (MyDocument)CSE.Entity;
            objDocumentBeingSaved = documentRepository.SaveDocument(changedEntity);
            break; // only one doc
        }
        if (objDocumentBeingSaved == null)
            throw new NullReferenceException("CreateDocumentDomainService.PersistChangeSet(): Error saving document information. Document is null in entity set.");
        // save document assignments after saving document object
        foreach (ChangeSetEntry CSE in ChangeSet.ChangeSetEntries.Where(i => i.Entity is DocumentAssignment)) {
            var changedEntity = (DocumentAssignment)CSE.Entity;
            changedEntity.DocumentId = objDocumentBeingSaved.Id;
            changedEntity.Id = documentRepository.SaveDocumentAssignment(objDocumentBeingSaved, changedEntity);
        }
        // save line items after saving document assignments
        foreach (ChangeSetEntry CSE in ChangeSet.ChangeSetEntries.Where(i => i.Entity is LineItem)) {
            var changedEntity = (LineItem)CSE.Entity;
            changedEntity.DocumentId = objDocumentBeingSaved.Id;
            changedEntity.Id = documentRepository.SaveLineItem(objDocumentBeingSaved, changedEntity);
        }
        documentRepository.GenerateDocumentNumber(objDocumentBeingSaved.Id);
    }
    catch {
        // ....
        throw;
    }
    return false;
}

Salesforce error while making a call to a web service API

I get the following error when I call the add() function:
You have uncommitted work pending. Please commit or rollback before calling out
I call getItems() to populate the drop-down, and then the add function to insert the item selected from the drop-down.
public PageReference add() {
    insert technology;
    return null;
}
public List<SelectOption> getItems() {
    List<SelectOption> options = new List<SelectOption>();
    List<Technology__c> AddedT = [SELECT Name FROM Technology__c];
    HttpRequest req = new HttpRequest();
    req.setMethod('GET');
    req.setEndpoint('http://submit.toolsberry.com/sfdc/technologies');
    Http http = new Http();
    HTTPResponse res = http.send(req);
    String response = res.getBody();
    XmlStreamReader reader = new XmlStreamReader(response);
    List<String> AllTech = new List<String>();
    while (reader.hasNext()) {
        if (reader.getEventType() == XmlTag.START_ELEMENT) {
            if ('string' == reader.getLocalName()) {
                while (reader.hasNext()) {
                    if (reader.getEventType() == XmlTag.END_ELEMENT) {
                        break;
                    } else if (reader.getEventType() == XmlTag.CHARACTERS) {
                        String tname = reader.getText();
                        AllTech.add(tname);
                    }
                    reader.next();
                }
            }
        }
        reader.next();
    }
    // (the code that builds the SelectOption list from AllTech is not shown in the question)
    return options;
}
This is because you need to do all your DML AFTER you are done with any callouts, not before. So any insert/update/upsert or delete statements must follow any http.send(req); calls.
** Looks like your list is getting repopulated after you call the add() method, because your list resides in a getter method **
This is thread-specific, and the sequence must hold within any given thread. So, for example, when a user clicks a button with an action method, all DML statements in that call must follow any callouts that happen in the same thread. The same applies to a trigger or batch Apex.
Having a getter/setter somewhere that updates data can also cause this. E.g.:
public String someProperty
{
    get
    {
        return [SELECT Name FROM CustomObject__c WHERE Id = :this.someId].Name;
    }
    set
    {
        CustomObject__c c = [SELECT Name FROM CustomObject__c WHERE Id = :this.someId];
        c.Name = value;
        update c;
    }
}
Also, never put a callout in a getter. Always put a callout in an explicit method that does it once and only once. Getters will get fired multiple times and callouts have strict limitations in Apex.
