Updating an entity in the Google App Engine datastore with Go

I am toying with GAE, Go and the datastore. I have the following structs:
type Coinflip struct {
    Participants []*datastore.Key
    Head         string
    Tail         string
    Done         bool
}

type Participant struct {
    Email string
    Seen  datastore.Time
}
(For those wondering: I store Participants as a slice of Key pointers because Go doesn't automatically dereference entities.)
Now I want to find the Participant with a particular Email address associated with a known Coinflip, like so (this works):
coinflip, _ := find(key_as_string, context)
participants, _ := coinflip.fetchParticipants(context) /* a slice of Participant */
var found *Participant
for i := 0; i < len(participants) && found == nil; i++ {
    if participants[i].Email == r.FormValue("email") {
        found = &participants[i]
    }
}
(*found).Seen = datastore.SecondsToTime(time.Seconds())
How do I save *found to the datastore? Apparently I need the key, but the coupling between the Participant struct and its Key is very loose.
I'm unsure how to proceed from here. Do I need to return the keys as well from the fetchParticipants call? The Java and Python GAE implementations seem quite a bit simpler (you just call put() on the object).
Thanks in advance,

Do I need to return the keys as well from the fetchParticipants call?
Yes. And then call "func Put(c appengine.Context, key *Key, src interface{}) (*Key, os.Error)"
The Java and Python GAE implementation seem quite a bit simpler (just
call put() on the object).
Probably a fair statement. The Go community has a very strong bias against "magic". In this case the Participant struct has two fields that you have declared. Adding the key to it in the background would be considered magic.
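For illustration, a minimal sketch of that flow against the pre-Go1 SDK used in the question (os.Error, datastore.Time); this two-value fetchParticipants is a hypothetical variant, not necessarily the asker's original:

// fetchParticipants loads each Participant and also returns its key,
// so a modified Participant can be written back with datastore.Put.
func (cf *Coinflip) fetchParticipants(c appengine.Context) ([]Participant, []*datastore.Key, os.Error) {
    participants := make([]Participant, len(cf.Participants))
    for i, key := range cf.Participants {
        if err := datastore.Get(c, key, &participants[i]); err != nil {
            return nil, nil, err
        }
    }
    return participants, cf.Participants, nil
}

The caller then keeps participants and keys side by side, so the index that matched the email also locates the key to save under:

participants, keys, _ := coinflip.fetchParticipants(context)
for i := range participants {
    if participants[i].Email == r.FormValue("email") {
        participants[i].Seen = datastore.SecondsToTime(time.Seconds())
        datastore.Put(context, keys[i], &participants[i])
        break
    }
}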

For interacting with data in Go, consider using our new library https://github.com/matryer/gae-records, an Active Record style data-object wrapper around the datastore. It sorts out a lot of the hassle for you.
For example, it supports:
// create a new model for 'People'
People := gaerecords.NewModel("People")
// create a new person
mat := People.New()
mat.SetString("name", "Mat").SetInt64("age", 28).Put()
// load person with ID 1
person, _ := People.Find(1)
// change some fields
person.SetInt64("age", 29).Put()
// load all People
peeps, _ := People.FindAll()
// delete mat
mat.Delete()
// delete user with ID 2
People.Delete(2)
// find the first three People by passing a func(*datastore.Query)
// to the FindByQuery method
firstThree, _ := People.FindByQuery(func(q *datastore.Query) {
    q.Limit(3)
})
// build your own query and use that
var ageQuery *datastore.Query = People.NewQuery().Limit(3).Order("-age")
// use FindByQuery with a query object
oldestThreePeople, _ := People.FindByQuery(ageQuery)
// using events, make sure 'People' records always get
// an 'updatedAt' value set before being put (created and updated)
People.BeforePut.On(func(c *gaerecords.EventContext) {
    person := c.Args[0].(*gaerecords.Record)
    person.SetTime("updatedAt", datastore.SecondsToTime(time.Seconds()))
})

Related

Dynamic Variables in loop

I'm trying to figure out what the best approach would be to handle variables when looping through a slice.
I have the following bit of code:
type Server struct {
    Name     string
    Features []string
}

func main() {
    var server1 Server
    server1.Name = "server-abc-1"
    server1.Features = append(server1.Features, "feature1", "feature2", "feature3")

    subMenuServer1 := systray.AddMenuItem(server1.Name, "Server Menu")

    // Manually adding a menu item
    subMenuFeatureItem1 := subMenuServer1.AddSubMenuItem("feature1", "feature1 description")

    // Creating menu items by looping through the slice - not working:
    // dynamicVariable is redeclared each iteration and never used
    for _, s := range server1.Features {
        dynamicVariable := subMenuServer1.AddSubMenuItem(s, "test")
    }
}
How would I loop through the features slice to dynamically create the menu items? The main problem is that I would need to generate a dynamic variable of some sort, which is not supported in Go. Is there a better way to do this?
I've found people mentioning using a hash table instead of dynamic variables. If so, what type would the hash table need if each value is going to be the result of subMenuServer1.AddSubMenuItem(s, "test")?
Creating new variables dynamically is almost always a bad idea, even in languages that support it. Go doesn't support it, since it's a statically compiled language.
You're much better off using a map of type map[string]T, T being the type returned by AddSubMenuItem.
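A minimal sketch of that approach, assuming the getlantern/systray package (where AddMenuItem and AddSubMenuItem return *systray.MenuItem):

// menuItems maps each feature name to its menu item,
// removing any need for dynamically named variables.
menuItems := make(map[string]*systray.MenuItem)
for _, s := range server1.Features {
    menuItems[s] = subMenuServer1.AddSubMenuItem(s, "test")
}

// Later, look an item up by its feature name:
if item, ok := menuItems["feature2"]; ok {
    item.SetTitle("feature2 (updated)")
}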

Is there any way to handle Google Datastore Kind Property names with spaces in Golang?

I'm running into a nasty issue with Datastore that doesn't appear to have any workaround.
I'm using the Google Appengine Datastore package to pull projection query results back into Appengine memory for manipulation, representing each entity as a struct, with each struct field corresponding to a property name, like so:
type Row struct {
    Prop1 string
    Prop2 int
}
This works great, but I've since extended my queries to read property names that contain spaces. While the query runs fine, the data can't be pulled back into structs: the loader tries to place each value into a struct field of the same name, and I get this kind of error:
datastore: cannot load field "Viewed Registration Page" into a "main.Row": no such struct field
Obviously Go cannot represent a struct field like this. There is a field of the relevant type, but there is no obvious way to tell the query to place the value there.
What would be the best solution here?
Cheers
Actually Go supports mapping entity property names to different struct field names using tags (see this answer for details: What are the use(s) for tags in Go?).
For example:
type Row struct {
    Prop1 string `datastore:"Prop1InDs"`
    Prop2 int    `datastore:"p2"`
}
But the Go implementation of the datastore package panics if you attempt to use a property name which contains a space.
To sum it up: you can't map property names having spaces to struct fields in Go (this is an implementation restriction which may change in the future).
But the good news is that you can still load these entities, just not into struct values.
You may load them into a variable of type datastore.PropertyList. datastore.PropertyList is basically a slice of datastore.Property, where Property is a struct which holds the name of the property, its value and other info.
This is how it can be done:
k := datastore.NewKey(ctx, "YourEntityName", "", 1, nil) // Create the key
e := datastore.PropertyList{}
if err := datastore.Get(ctx, k, &e); err != nil {
    panic(err) // Handle error
}

// Now e holds your entity, let's list all its properties.
// PropertyList is a slice, so we can simply "range" over it:
for _, p := range e {
    ctx.Infof("Property %q = %q", p.Name, p.Value)
}
If your entity has a property "Have space" with value "the_value", you will see for example:
2016-05-05 18:33:47,372 INFO: Property "Have space" = "the_value"
Note that you could implement the datastore.PropertyLoadSaver type on your struct and handle this under the hood, so basically you could still load such entities into struct values, but you have to implement this yourself.
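A minimal sketch of that idea, assuming the slice-based PropertyLoadSaver interface (Load([]Property) error and Save() ([]Property, error); older SDKs used a channel-based variant), with the Viewed field name chosen here for illustration:

type Row struct {
    Prop1  string
    Prop2  int
    Viewed string // holds the "Viewed Registration Page" property
}

// Load routes each raw property, including the one with a space
// in its name, to an ordinary struct field.
func (r *Row) Load(ps []datastore.Property) error {
    for _, p := range ps {
        switch p.Name {
        case "Prop1":
            r.Prop1, _ = p.Value.(string)
        case "Prop2":
            v, _ := p.Value.(int64) // the datastore returns integers as int64
            r.Prop2 = int(v)
        case "Viewed Registration Page":
            r.Viewed, _ = p.Value.(string)
        }
    }
    return nil
}

// Save writes the fields back under their original property names.
func (r *Row) Save() ([]datastore.Property, error) {
    return []datastore.Property{
        {Name: "Prop1", Value: r.Prop1},
        {Name: "Prop2", Value: int64(r.Prop2)},
        {Name: "Viewed Registration Page", Value: r.Viewed},
    }, nil
}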
But fight to keep entity names and property names free of spaces. You will make your life harder and miserable if you allow them.
All programming languages that I know treat a space as the end of a variable/constant name. The obvious solution is to avoid using spaces in property names.
I would also note that a property name becomes a part of every entity and every index entry. I don't know if Google somehow compresses them, but I tend to use short property names in any case.
You can use tags to rename properties.
From the docs:
// A and B are renamed to a and b.
// A, C and J are not indexed.
// D's tag is equivalent to having no tag at all (E).
// I is ignored entirely by the datastore.
// J has tag information for both the datastore and json packages.
type TaggedStruct struct {
    A int `datastore:"a,noindex"`
    B int `datastore:"b"`
    C int `datastore:",noindex"`
    D int `datastore:""`
    E int
    I int `datastore:"-"`
    J int `datastore:",noindex" json:"j"`
}

System.TypeException: Cannot have more than 10 chunks in a single operation

I have this very weird error, "System.TypeException: Cannot have more than 10 chunks in a single operation". Has anyone seen or encountered this before? Please can you guide me if you know how to solve it.
I am trying to insert different types of sObjects together in a list of sObject. The list is never larger than 10 rows.
This post here:
https://developer.salesforce.com/forums/ForumsMain?id=906F000000090nUIAQ
suggests that it is not the number of different sObjects but the order of the objects that causes this chunk limit to be exceeded. In other words, "1,1,1,2,2,2" has two chunks (a single transition from "1" to "2"), while "1,2,3,4,5,6" has six chunks, even though the number of elements is the same. Putting the objects into the list sorted by object type is the suggested solution.
Is it possible for you to create a reasonable test case with only 2 or 3 rows?
There are two possible explanations for this issue:
As Jagular noted, you did not order the sobjects you tried to insert, so there are more than 10 'chunks' in the list.
You try to insert > 2000 records, and > 1 sobject type. This one seems like a Salesforce Bug, since the error message doesn't match the issue.
Scenario 1 and its solution
When you have a hybrid list, make sure the objects are not scattered in random order, for example A,B,A,B,A,B,A,B…. Salesforce has trouble switching sObject types more than 10 times; it calls this switching limit the chunk limit. If the hybrid list is sorted before being passed to DML, Salesforce is much happier, e.g. A,A,A,A,B,B,B,B…. Here Salesforce only has to switch once (read all A objects → switch → read all B objects). The maximum number of chunks is 10, so in this case we are safe.
listToUpdate.sort();
UPDATE listToUpdate;
Scenario 2 and its solution
Another point to bear in mind is that a hybrid list with many objects of one type can also run into the TypeException. Each chunk may contain at most 200 records of a single type, so a list with 1001 objects of type A and 1001 objects of type B spans 6 chunks for A and 6 for B, i.e. 12 chunks in total, which exceeds the maximum of 10. In this case we have to foresee how many objects can enter this code and split the work into safely sized lists, one DML call each.
Scenario 3 and its solution
The third scenario is a hybrid list containing objects of more than 10 types: even if the list is very small, a switch happens every time Salesforce reads a different sObject type. In this case we have to allot a separate list per sObject type and pass each one to DML on its own. Doing that in an Apex trigger or Apex class can cause trouble, since multiple DMLs are initiated in one context; passing these per-type lists for DML in a different context really eases the load you put on the platform. Consider doing this kind of logic in a Batch Apex job rather than an Apex trigger or class.
Hope this helps.
Below is code that should cover all 3 scenarios from Arpit Sethi.
It's a piece of code I took from this topic: https://developer.salesforce.com/forums/?id=906F000000090nUIAQ,
and modified it to cover Scenario 2.
private static void saveSobjectSet(List<Sobject> listToUpdate) {
    Integer SFDC_CHUNK_LIMIT = 10;
    // Works around: System.TypeException: Cannot have more than 10 chunks in a single operation
    Map<String, List<Sobject>> sortedMapPerObjectType = new Map<String, List<Sobject>>();
    Map<String, Integer> numberOf200ChunkPerObject = new Map<String, Integer>();
    for (Sobject obj : listToUpdate) {
        String objTypeREAL = String.valueOf(obj.getSObjectType());
        if (!numberOf200ChunkPerObject.containsKey(objTypeREAL)) {
            numberOf200ChunkPerObject.put(objTypeREAL, 1);
        }
        // Number of the current 200-record chunk for this object type
        Integer numberOf200Record = numberOf200ChunkPerObject.get(objTypeREAL);
        // Map key: object type + number of its current 200-record chunk
        String objTypeCURRENT = objTypeREAL + String.valueOf(numberOf200Record);
        List<Sobject> currentList = sortedMapPerObjectType.get(objTypeCURRENT);
        if (currentList == null || currentList.size() > 199) {
            if (currentList != null && currentList.size() > 199) {
                // The current chunk is full: advance to a new chunk key for this type
                numberOf200Record++;
                numberOf200ChunkPerObject.put(objTypeREAL, numberOf200Record);
                objTypeCURRENT = objTypeREAL + String.valueOf(numberOf200Record);
            }
            sortedMapPerObjectType.put(objTypeCURRENT, new List<Sobject>());
        }
        sortedMapPerObjectType.get(objTypeCURRENT).add(obj);
    }
    while (sortedMapPerObjectType.size() > 0) {
        // Build a list that is sorted by object type and stays within the
        // chunking limit, so the update cannot hit the chunk error
        List<Sobject> safeListForChunking = new List<Sobject>();
        List<String> keyListSobjectType = new List<String>(sortedMapPerObjectType.keySet());
        for (Integer i = 0; i < SFDC_CHUNK_LIMIT && !sortedMapPerObjectType.isEmpty(); i++) {
            safeListForChunking.addAll(sortedMapPerObjectType.remove(keyListSobjectType.remove(0)));
        }
        update safeListForChunking;
    }
}
Hope it helps,
Bye
Hi, I devised a simple way to sort a list of different sObject types:
public List<Sobject> SortRecordsByType(List<Sobject> records) {
    List<Sobject> response = new List<Sobject>();
    Map<String, List<Sobject>> sortDictionary = new Map<String, List<Sobject>>();
    for (Sobject record : records) {
        String objectTypeName = record.getSobjectType().getDescribe().getName();
        if (sortDictionary.containsKey(objectTypeName)) {
            sortDictionary.get(objectTypeName).add(record);
        } else {
            sortDictionary.put(objectTypeName, new List<Sobject>{ record });
        }
    }
    // Arrange the records in object-type order
    for (String objectName : sortDictionary.keySet()) {
        response.addAll(sortDictionary.get(objectName));
    }
    return response;
}
Hopefully this solves your problem.

Store an object in memcache of GAE in Go

I want to store an object in GAE's memcache using Go. The gae documentation only shows how to store a []byte here: https://developers.google.com/appengine/docs/go/memcache/overview
Of course there are general ways to serialize an object into []byte, which would accomplish my task. But reading the memcache reference, I found there is an "Object" field in the memcache Item:
// Object is the Item's value for use with a Codec.
Object interface{}
That seems to be a built-in mechanism for storing an object in memcache. However, the GAE documentation does not provide sample code.
Could anyone please show me an example? Thanks in advance.
OK, I just figured it out myself. The memcache package has two built-in Codecs: gob and json. Just use one of them (or, of course, create your own Codec):
var in, out struct{ I int }

// Put "in" into memcache
in.I = 100
item := &memcache.Item{
    Key:    "TestKey",
    Object: in,
}
memcache.Gob.Set(c, item) // error checking omitted for convenience

// Retrieve the value
memcache.Gob.Get(c, "TestKey", &out)
fmt.Fprint(w, out) // will print {100}
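The json Codec works the same way; a variant of the same round trip (the struct field must be exported, as I is here, so encoding/json can marshal it):

memcache.JSON.Set(c, &memcache.Item{Key: "TestKeyJSON", Object: in})
memcache.JSON.Get(c, "TestKeyJSON", &out)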
Thanks all

Display a (Synopse) SQLite3 table-column in a Delphi7 TListView

I would like to take the following unit (DrivesData) and display the drive column in a TListView. I've never worked with the (Synopse) SQLite3 code before, so I'm hoping someone can give me a little push in the right direction.
Just add the DrivesData unit to the uses clause and run; it will create the "drives.sqlite" database file with a list of drives 'A' to 'Z'.
unit DrivesData;

interface

uses
  SynCommons, SQLite3Commons;

type
  TDrives = class(TSQLRecord)
  private
    { Private declarations }
    FDrive: RawUTF8;
  protected
    { Protected declarations }
    FDrivesModel: TSQLModel;
    FDrivesDatabase: TSQLRest;
  public
    { Public declarations }
    constructor Create(); override;
    destructor Destroy(); override;
  published
    { Published declarations }
    property Drive: RawUTF8 read FDrive write FDrive;
  end;

var
  DriveRecord: TDrives;

implementation

uses
  SQLite3;

function CreateDrivesModel(): TSQLModel;
begin
  Result := TSQLModel.Create([TDrives]);
end;

{ TDrives }

constructor TDrives.Create();
var
  X: Char;
begin
  inherited Create();
  FDrivesModel := CreateDrivesModel();
  FDrivesDatabase := TSQLRestServerDB.Create(FDrivesModel, 'drives.sqlite');
  TSQLRestServerDB(FDrivesDatabase).DB.Execute(
    'CREATE TABLE IF NOT EXISTS drives ' +
    '(id INTEGER PRIMARY KEY, drive TEXT NOT NULL UNIQUE COLLATE NOCASE);');
  for X := 'A' to 'Z' do
  begin
    TSQLRestServerDB(FDrivesDatabase).DB.Execute(
      'INSERT OR IGNORE INTO drives (drive) VALUES ("' + X + ':")');
  end;
end;

destructor TDrives.Destroy();
begin
  if Assigned(FDrivesDatabase) then
    FDrivesDatabase.Free();
  if Assigned(FDrivesModel) then
    FDrivesModel.Free();
  inherited Destroy();
end;

initialization
  DriveRecord := TDrives.Create();

finalization
  if Assigned(DriveRecord) then
    DriveRecord.Free();

end.
Nice try!
But I'm afraid you are missing some points of the framework:
for instance you're mixing record level and MVC application level: a TSQLRecord maps a DB table and you should not declare MVC TSQLModel and TSQLRest inside this class;
and you're missing the ORM approach, you don't need to write all those SQL code (the CREATE TABLE and the INSERT): the framework will write it for you, with no error, and the exact expected column type (with collations)!
Instead of using a TSQLRestServerDB directly, you would be better off using a TSQLRestClientDB (which will instantiate its privately owned TSQLRestServerDB), even if you are still working locally. You'll get a lot more features with no performance penalty.
You are using a Char type in your code. Our framework is UTF-8 oriented, so you should use AnsiChar instead, or use the StringToUtf8() function to ensure correctness (at least with Unicode versions of Delphi).
I recommend that you take a look at the sample source code and the provided documentation (especially the SAD document, starting with the general presentation in the first pages, including the SynFile main demo).
To retrieve some data and display it in the VCL (e.g. in a TListBox), take a look at the TSQLTableJSON class. There are code samples in the SAD document (check the keyword index at the beginning of the document if you're a bit lost).
Perhaps StackOverflow is not the best place to ask such specific questions. You have our forum available at http://synopse.info to post any questions regarding this framework. You can post your code here.
Thanks for your interest!
