Manually assign a value to a Hibernate UUID id

As we know, in Hibernate, if the id generator is configured as "uuid", Hibernate will automatically generate a UUID value for the id field when saving a new object. If the generator is configured as "assigned", the id must be assigned a value before saving an object.
I also found that if the generator is configured as "uuid" and I assign the id a value manually, Hibernate replaces it with a newly generated UUID.
My question is: when the generator is configured as "uuid", how can I manually assign a value to the id?
PS: I use Spring's HibernateDaoSupport to save:
org.springframework.orm.hibernate3.support.HibernateDaoSupport.save(Object obj)
Thanks!

If you need it only in rare special cases, the simplest way is to issue INSERT statements in native SQL instead of using save().
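For example, a minimal sketch of that native-SQL route inside a HibernateDaoSupport-based DAO (the table and column names FOO, ID and NAME are placeholders, not taken from the question):

// Minimal sketch inside a HibernateDaoSupport-based DAO; FOO, ID and NAME are placeholder
// table/column names for your own mapping.
public void insertWithAssignedId(String id, String name) {
    // Bypasses the configured id generator entirely, so the supplied id is kept as-is.
    getSession().createSQLQuery("insert into FOO (ID, NAME) values (:id, :name)")
            .setString("id", id)
            .setString("name", name)
            .executeUpdate();
}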
Alternatively, you can customize the generator to achieve the desired behaviour:
public class FallbackUUIDHexGenerator extends UUIDHexGenerator {

    private String entityName;

    @Override
    public void configure(Type type, Properties params, Dialect d)
            throws MappingException {
        entityName = params.getProperty(ENTITY_NAME);
        super.configure(type, params, d);
    }

    @Override
    public Serializable generate(SessionImplementor session, Object object)
            throws HibernateException {
        // Only generate a new UUID when the entity has no id assigned yet;
        // otherwise keep the manually assigned value.
        Serializable id = session
                .getEntityPersister(entityName, object)
                .getIdentifier(object, session);
        if (id == null)
            return super.generate(session, object);
        else
            return id;
    }
}
and configure Hibernate to use it by setting its fully qualified class name as the generator strategy.
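For reference, a minimal sketch of wiring the custom generator in, assuming annotation-based mappings; with hbm.xml mappings you would put the same fully qualified name in the class attribute of the generator element instead. The com.example package is a placeholder:

@Entity
public class Foo {

    @Id
    @GeneratedValue(generator = "fallback-uuid")
    @GenericGenerator(name = "fallback-uuid",
            strategy = "com.example.FallbackUUIDHexGenerator") // placeholder package
    private String id;

    // ...
}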

Related

Error "The ID `1` has an invalid format" when querying HotChocolate

I tried to build my own project, using ChilliCream/graphql-workshop as an example.
There is a part where the id parameter of a query is marked with the IDAttribute.
The description of the ID type says the following:
The ID scalar type represents a unique identifier, often used to
refetch an object or as key for a cache. The ID type appears in a JSON
response as a String; however, it is not intended to be
human-readable. When expected as an input type, any string (such as
"4") or integer (such as 4) input value will be accepted as an ID.
My C# query source looks like this:
[ExtendObjectType(Name = GraphqlQueryNames.Query)]
public class EmployeeQuery
{
    public async Task<Employee> GetEmployeeByIdAsync(
        [ID] int id,
        [Service] IEmployeeRepository employeeRepository,
        CancellationToken token)
    {
        return await employeeRepository.GetEmployeeByIdAsync(id, token);
    }
}
And in playground:
# 1 passed as value of $id
query getEmployeeById($id: ID!) {
  employeeById(id: $id) {
    familyName
  }
}
Whether the value is a string or a number, the server throws the same error: "The ID `1` has an invalid format".
If we remove the [ID] attribute from the C# code and use the parameter as 'Int!' in the GraphQL query, it works fine.
What's wrong with ID, and why is it used in the example (AttendeeQueries.cs)? HotChocolate 10.5.3.
First, the ID scalar type is part of the GraphQL standard and is defined as:
In GraphQL the ID scalar type represents a unique identifier, often
used to refetch an object or as the key for a cache. The ID type is
serialized in the same way as a String; however, defining it as an ID
signifies that it is not intended to be human‐readable.
Relay is a GraphQL client JavaScript framework for React applications. Apollo GraphQL is another alternative.
HotChocolate has a few helpers to enable a "Relay-style GraphQL API". These helpers convert the id field to an opaque base64-encoded string. This is a good thing because:
it helps with caching, since all entities get a globally unique id
it hides the actual id from users (?)
Even if you enable "Relay support" in HotChocolate, you don't have to use Relay; you can still use any GraphQL client (Apollo Client is my favourite).
Now, if you just want to use the GraphQL ID scalar type, you can try this as I suggested first:
First, remove the ID attribute from the query:
[ExtendObjectType(Name = GraphqlQueryNames.Query)]
public class EmployeeQuery
{
    public async Task<Employee> GetEmployeeByIdAsync(
        int id,
        [Service] IEmployeeRepository employeeRepository,
        CancellationToken token)
    {
        return await employeeRepository.GetEmployeeByIdAsync(id, token);
    }
}
then specify ID scalar type like this:
public class EmployeeType : ObjectType<Employee>
{
    protected override void Configure(IObjectTypeDescriptor<Employee> descriptor)
    {
        descriptor.Field(r => r.Id).Type<IdType>();
    }
}
But if you want to enable "Relay support" in HotChocolate, follow Arsync's answer (I changed this to HotChocolate v11 which has slightly different syntax):
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddGraphQLServer().EnableRelaySupport();
    }
}
Update for Hot Chocolate v12: it's now possible to enable global object identification without the other Relay-related features:
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddGraphQLServer().AddGlobalObjectIdentification();
    }
}
Then in your type definition:
public class EmployeeType : ObjectType<Employee>
{
    protected override void Configure(IObjectTypeDescriptor<Employee> descriptor)
    {
        // Relay ID support: Employee.Id is exposed as an opaque string; the real int id
        // is available again when it is passed to the DataLoader below.
        descriptor
            .ImplementsNode()
            .IdField(c => c.Id)
            .ResolveNode((context, id) =>
                context.DataLoader<EmployeeByIdDataLoader>().LoadAsync(id, context.RequestAborted));
    }
}
Or even simpler with v12:
public class EmployeeType : ObjectType<Employee>
{
    protected override void Configure(IObjectTypeDescriptor<Employee> descriptor)
    {
        descriptor
            .Field(f => f.Id).ID(nameof(Employee));
    }
}
If you try to query for employees now, you'll see that the id is not an integer but an opaque string, something like "RW1wbG95ZWUKaTE=". This string is generated by HotChocolate, see the IdSerializer source. If you base64-decode it:
$ echo "RW1wbG95ZWUKaTE=" | base64 -d
Employee
i1
The error message you received, "The ID `1` has an invalid format", occurs because the field now expects such an encoded string, not a plain integer.
This query should work:
query getEmployeeById {
  employeeById(id: "RW1wbG95ZWUKaTE=") {
    familyName
  }
}
I found that IDAttribute is for Relay (it is located in the HotChocolate.Types.Relay namespace), so Relay support needs to be enabled and configured (source):
ISchema schema = SchemaBuilder.New()
    .EnableRelaySupport()
    ...
    .Create();
And in ObjectType:
public class MyObjectType
    : ObjectType<MyObject>
{
    protected override void Configure(IObjectTypeDescriptor<MyObject> descriptor)
    {
        descriptor.AsNode()
            .IdField(t => t.Id)
            .NodeResolver((ctx, id) =>
                ctx.Service<IMyRepository>().GetMyObjectAsync(id));
        ...
    }
}
It seems the example project graphql-workshop needs more in-place explanation of the purpose of these things. It can be found here.

Put Text type to datastore with Objectify 6

I'm currently migrating the project's DAO classes from a JDO implementation to Objectify v6.
The requirement is to make sure that, in case of a rollback, it will still be possible to load entities that were saved by Objectify with the old version of the DAO.
In the old code, strings are stored as Text. If I leave the Text field in the entity definition, Objectify stores it as a String in the datastore (because the Text type no longer exists).
Currently the new DAO implementation is not backward compatible because of a ClassCastException which arises when the JDO implementation casts String to the Text type.
Is there a way to store the Text type in the datastore with Objectify v6?
I tried to use String instead of Text in the entity definition and create a TranslatorFactory to do the conversion, but I wasn't able to find the correct datastore Value implementation type.
public class StringTextTranslatorFactory implements TranslatorFactory<String, Text> {

    @Override
    public Translator<String, Text> create(TypeKey<String> tk, CreateContext ctx, Path path) {
        return new Translator<String, Text>() {

            @Override
            public String load(Value<Text> node, LoadContext ctx, Path path) throws SkipException {
                Text text = node.get();
                return text != null ? text.getValue() : "";
            }

            @Override
            public Value<Text> save(String pojo, boolean index, SaveContext ctx, Path path)
                    throws SkipException {
                return ???;
            }
        };
    }
}
Update
The project is using an implementation of JDO 2.3 for the App Engine Datastore. The implementation is based on version 1.0 of the DataNucleus Access Platform.
The data entity is defined as follows:
@PersistenceCapable(identityType = IdentityType.APPLICATION)
public class CrmNote {

    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Long id;

    @Persistent
    private Text note;
}
Stacktrace:
java.lang.ClassCastException: java.lang.String cannot be cast to com.google.appengine.api.datastore.Text
at com.timzon.snapabug.server.data.CrmNote.jdoReplaceField(CrmNote.java)
at com.timzon.snapabug.server.data.CrmNote.jdoReplaceFields(CrmNote.java)
at org.datanucleus.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2772)
at org.datanucleus.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:2791)
at org.datanucleus.store.appengine.DatastorePersistenceHandler.fetchObject(DatastorePersistenceHandler.java:519)
at org.datanucleus.store.appengine.query.DatastoreQuery.entityToPojo(DatastoreQuery.java:649)
at org.datanucleus.store.appengine.query.DatastoreQuery.entityToPojo(DatastoreQuery.java:603)
at org.datanucleus.store.appengine.query.DatastoreQuery.access$300(DatastoreQuery.java:119)
at org.datanucleus.store.appengine.query.DatastoreQuery$6.apply(DatastoreQuery.java:783)
at org.datanucleus.store.appengine.query.DatastoreQuery$6.apply(DatastoreQuery.java:774)
at org.datanucleus.store.appengine.query.LazyResult.resolveNext(LazyResult.java:94)
at org.datanucleus.store.appengine.query.LazyResult.resolveAll(LazyResult.java:116)
at org.datanucleus.store.appengine.query.LazyResult.size(LazyResult.java:110)
at org.datanucleus.store.appengine.query.StreamingQueryResult.size(StreamingQueryResult.java:130)
at org.datanucleus.store.query.AbstractQueryResult.toArray(AbstractQueryResult.java:399)
at java.util.ArrayList.<init>(ArrayList.java:178)
at com.timzon.snapabug.server.dao.CrmNoteDAO.getOrderedCrmNotes(CrmNoteDAO.java:27)
The exception happens in the auto-generated jdoReplaceField method, which is added by the JDO post-compilation "enhancement". I decompiled the enhanced class and can see that the datastore value is cast directly to the Text type:
public void jdoReplaceField(int index) {
    if (this.jdoStateManager == null) {
        throw new IllegalStateException("state manager is null");
    } else {
        switch (index) {
        case 0:
            this.id = (Long) this.jdoStateManager.replacingObjectField(this, index);
            break;
        case 1:
            this.note = (Text) this.jdoStateManager.replacingObjectField(this, index);
            break;
        default:
            throw new IllegalArgumentException("out of field index :" + index);
        }
    }
}
So, if the note field is saved in the datastore as a String, then in case of a rollback a ClassCastException will be thrown.
There's no way to explicitly store a Text type with the Google-provided SDK that Objectify 6 uses; there is only StringValue. Text is not even in the jar.
However, I don't think this should matter. Ultimately both SDKs (the old appengine one and the new one) are just converting back and forth to protobuf structures. They are supposed to be compatible.
It's especially strange because the old low level API wrote strings into the Entity structure; Text was required only if the strings exceeded a certain length. So JDO should handle String. Do you have some sort of special annotation on your String field to force it to expect Text? What does that stacktrace look like?
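To illustrate the "only StringValue" point with a minimal sketch: if you did go down the translator route from the question, the save side could only ever produce a StringValue, and its generic type would have to be String rather than Text, which is exactly why the Translator<String, Text> in the question cannot be completed as written. The setExcludeFromIndexes call below is an assumption, added to sidestep the index length limit for long strings:

import com.google.cloud.datastore.StringValue;
import com.google.cloud.datastore.Value;

// Sketch of what a save side can return with the Cloud Datastore SDK: a StringValue, nothing Text-like.
Value<String> toDatastoreValue(String pojo) {
    return StringValue.newBuilder(pojo)
            .setExcludeFromIndexes(true) // long strings cannot be indexed anyway
            .build();
}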

Limit C# model class parameter call to MS SQL Server for a non-existing table field

Working with a database-first approach to create an ASP.NET Core MVC web app with user authentication, I would like to override the way the properties of the IdentityUser class are queried against the database. The reason is that the current implementation of IdentityUser has two new properties, NormalizedEmail and NormalizedUserName (which in my opinion detracts from normalization).
Is there a way I can write the code below in the model class so that those two properties are not included in the query to the database, or is that something that needs to be done in the controller class?
public class IdentityUser : Microsoft.AspNetCore.Identity.EntityFrameworkCore.IdentityUser
{
    public override string NormalizedUserName
    { get { return null; } set { value = null; } }

    public override string NormalizedEmail
    { get { return null; } set { value = null; } }
}
Not as far as I can tell; both properties are part of the data model, as explained in this Issue #351
About Identity 3.0:
...Instead we compute a normalized representation of the user name and we
store it in a separate column so that lookups by normalized user name
should now be sargable.
So in other words, if you "override the way the parameters from IdentityUser class are queried to the database", in essence you'll be doing exactly the opposite of what the class intends to do.

How to switch between DatabaseGeneratedOption.Identity, Computed and None at runtime without having to generate empty DbMigrations

I am migrating a legacy database to a new database which we need to access and "manage" (as oxymoronic as it might sound) primarily through Entity Framework Code-First.
We are using MS SQL Server 2014.
The legacy database contained some tables with computed columns. Typical GUID and DateTime stuff.
Technically speaking, these columns did not have a computed column specification, but rather were given a default value with NEWID() and GETDATE().
We all know that it is very easy to configure the DbContext to deal with those properties as follows:
modelBuilder.Entity<Foo>()
    .Property(t => t.Guid)
    .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);

modelBuilder.Entity<Bar>()
    .Property(t => t.DTS)
    .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
The above would instruct the Entity Framework to ignore submitting any supplied values for such properties during INSERTs and UPDATEs.
But now we need to allow for the import of legacy records and maintain the OLD values, including the PRIMARY KEY, which is marked as IDENTITY.
This means we would have to set the Id, Guid and DTS properties to DatabaseGeneratedOption.None while inserting those records.
For the case of Id, we would have to somehow execute SET IDENTITY_INSERT ... ON/OFF within the connection session.
And we want to do this importing process via Code-First as well.
If I modify the model and "temporarily" set those properties to DatabaseGeneratedOption.None after the database has been created, we get the typical:
The model backing the context has changed since the database was created. Consider using Code First Migrations to update the database.
I understand that we could generate an empty coded migration with -IgnoreChanges so as to "establish" this latest version of the context, but this wouldn't be an acceptable strategy, as we would have to run empty migrations back and forth solely for this purpose.
Half an answer:
We have considered giving these properties nullable types, i.e.
public class Foo
{
    ...
    public Guid? Guid { get; set; }
}

public class Bar
{
    ...
    public DateTime? DTS { get; set; }
}
While taking care of the default values in an initial DbMigration:
CreateTable(
    "dbo.Foos",
    c => new
    {
        Id = c.Int(nullable: false, identity: true),
        Guid = c.Guid(nullable: false, defaultValueSql: "NEWID()"),
    })
    .PrimaryKey(t => t.Id);

CreateTable(
    "dbo.Bars",
    c => new
    {
        Id = c.Int(nullable: false, identity: true),
        DTS = c.DateTime(nullable: false, defaultValueSql: "GETDATE()"),
    })
    .PrimaryKey(t => t.Id);
The Question:
But the question remains: Is there a way to switch between DatabaseGeneratedOption.Identity, DatabaseGeneratedOption.Computed and DatabaseGeneratedOption.None at runtime?
At the very least, how could we turn DatabaseGeneratedOption.Identity on/off at runtime?
A certain amount of the configuration of the context is always going to be dependent on the runtime environment - for example, proxy generation and validation. As such, runtime configuration of the Entity Framework DbContext is something I leverage quite heavily.
Although I've never used this approach to switch the configuration of the context on a per use-case basis, I see no reason why this would not work.
In its simplest form, this can be achieved by having a set of EntityTypeConfiguration classes for each environment. Each configuration set is then wired to the DbContext on a per-environment basis. Again, in its simplest form this could be achieved by having a DbContext type per environment. In your case, this would be per use-case.
Less naively, I usually encapsulate the configuration of the context in an environment-specific unit of work. For example, the unit of work for an Asp.Net environment has an underlying DbContext configured to delegate validation to the web framework, as well as to turn off proxy generation to prevent serialisation issues. I imagine this approach would have similar usefulness to your problem.
For example (using brute force code):
// Foo configuration which enforces computed columns
public class FooConfiguration : EntityTypeConfiguration<Foo>
{
    public FooConfiguration()
    {
        Property(p => p.DateTime).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
        Property(p => p.Guid).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Computed);
    }
}

// Foo configuration that allows computed columns to be overridden
public class FooConfiguration2 : EntityTypeConfiguration<Foo>
{
    public FooConfiguration2()
    {
        Property(p => p.DateTime).HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
        Property(p => p.Guid).HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
    }
}

// DbContext that enforces computed columns
public class MyContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new FooConfiguration());
    }
}

// DbContext that allows computed columns to be overridden
public class MyContext2 : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new FooConfiguration2());
    }
}
This can obviously be tidied up; we usually use a combination of factory and strategy patterns to encapsulate the creation of a runtime-specific context. In combination with a DI container, this allows the correct set of configuration classes to be injected on a per-environment basis.
Example usage:
[Fact]
public void CanConfigureContextAtRuntime()
{
    // Enforce computed columns
    using (var context = new MyContext())
    {
        var foo1 = new Foo();
        context.Foos.Add(foo1);
        context.SaveChanges();
    }

    // Allow overridden computed columns
    using (var context = new MyContext2())
    {
        var foo2 = new Foo { DateTime = DateTime.Now.AddYears(-3) };
        context.Foos.Add(foo2);
        context.SaveChanges();
    }

    // etc
}

Add a new attribute to entity in datastore?

I have an entity in my app engine datastore. There's actually only one instance of this entity. I can see it in my admin console. Is it possible to add a new attribute to the entity via the admin console (using gql perhaps)?
Right now it looks something like:
Entity: Foo
Attributes: mName, mAge, mScore
and I'd like to add a new boolean attribute to this entity like "mGraduated" or something like that.
In the worst case I can write some code to delete the entity then save a new one, but yeah was just wondering.
Thanks
-------- Update ---------
I tried adding the new attribute to my class (using Java), and upon loading from the datastore I get the following:
java.lang.NullPointerException:
Datastore entity with kind Foo and key Foo(\"Foo\") has a null property named mGraduated.
This property is mapped to com.me.types.Foo.mGraduated, which cannot accept null values.
This is what my entity class looks like; I just added the new attribute (mGraduated), then deployed, then tried loading the single entity from the datastore (which produced the above exception):
@PersistenceCapable
public class Foo
{
    @PrimaryKey
    private String k;

    /** Some old attributes look like the following. */
    @Persistent
    @Extension(vendorName = "datanucleus", key = "gae.unindexed", value = "true")
    private String mName;

    ...

    /** Tried adding the new one. */
    @Persistent
    @Extension(vendorName = "datanucleus", key = "gae.unindexed", value = "true")
    private boolean mGraduated;
}
The only way to implement this is to use Boolean as the type for the new property.
Then, in the set method, you can accept a boolean value; that's no issue.
If you want the get method to also return boolean, you can, but be sure to check whether the value is null and, if so, return a default value (e.g. true).
So:
private Boolean newProp = null; // can also assign a default value, e.g. true

public void setNewProp(boolean val)
{
    this.newProp = val;
}

public boolean getNewProp()
{
    if (this.newProp == null)
        return true; // default value if not set
    return this.newProp.booleanValue();
}
I recommend that you not migrate your data in this case; it can be very costly and can deplete your quota easily (read old data, create new, delete old = 3 operations for every entry in your data store).
You can't do this through the admin console, but you shouldn't have to delete the entity. Instead, just update it: the Datastore does not enforce schemas for Kinds.
E.g., if Foo is a subclass of db.Model (Python), change your model subclass to include the new property; fetch the model instance (e.g., by its key), update the instance, including setting the value of the new field; and save the modified instance. Since you just have one instance this is easy. With many such instances to update you'd probably want to do this via task queue tasks or via a mapreduce job.
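If you prefer to do the one-off update from Java rather than Python, the same idea works with the low-level datastore API. A minimal sketch, assuming the kind and key name Foo("Foo") shown in the error message above, and false as the chosen default for the new property:

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.KeyFactory;

// One-off backfill: load the single Foo entity, add the new property, write it back.
void backfillGraduatedFlag() throws EntityNotFoundException {
    DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
    Entity foo = datastore.get(KeyFactory.createKey("Foo", "Foo"));
    foo.setUnindexedProperty("mGraduated", false); // default value for the new attribute
    datastore.put(foo);
}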
You have declared the new mGraduated field using the primitive type boolean, which cannot be null. The existing entity can't be loaded into the model class because it doesn't have this property. One option is to declare this property using the Boolean class, which can accept a null value.
The Admin Console only knows about properties in existing entities. You cannot use the Admin Console directly to create a new property with a name not used by any existing entities. (This is just a limitation of the Console. App code can do this easily.)
