First I create an index.yaml:
indexes:

- kind: Tarifa 2014
  ancestor: yes
  properties:
  - name: Date
    direction: desc

- kind: Tarifa 2014
  ancestor: yes
  properties:
  - name: Division
  - name: Heat
  - name: Date
    direction: desc
Then I put some data in:
key := datastore.NewKey(s.Context, "Tarifa 2014", "", 0, s.Root)
key, err = datastore.Put(s.Context, key, m)
Simple queries work:
key := datastore.NewKey(s.Context, "Tarifa 2014", "", id, s.Root)
err = datastore.Get(s.Context, key, &m)
but this one does not; is it because my index is still empty?
datastore.NewQuery(e).Ancestor(s.Root).Filter("Division =", d).Filter("Heat =", h).Order("-Date")
The same goes for this; it also does not work:
datastore.NewQuery(e).Ancestor(s.Root).Order("-Date")
My index looks like this on appspot.com, and my datastore looks like this.
Note that on localhost:8080 all queries work fine.
For a reason that is not clear, if I use Namespace, indexed queries break on appspot.com:
c2 := endpoints.NewContext(r)
c, err := appengine.Namespace(c2, "")
if err != nil {return err}
You need to use the endpoints context directly, without a namespace:
c := endpoints.NewContext(r)
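For completeness, here is a minimal sketch of how the handler could run the composite-index query with that plain endpoints context. The variables s.Root, d (division) and h (heat) are assumptions carried over from the question, and the element type of results stands in for whatever struct was stored with datastore.Put; none of this is code from the original answer.

// Minimal sketch, assuming s.Root, d and h exist as in the question,
// and that entity is the struct type that was stored with datastore.Put.
c := endpoints.NewContext(r) // plain endpoints context, no appengine.Namespace wrapper

q := datastore.NewQuery("Tarifa 2014").
    Ancestor(s.Root).
    Filter("Division =", d).
    Filter("Heat =", h).
    Order("-Date")

var results []entity
if _, err := q.GetAll(c, &results); err != nil {
    return err
}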
I have a Ruby on Rails application to enter results and create a league table for a football competition.
I'm trying to input some results by creating records in the database through heroku and I get error messages.
The application isn't perfectly designed: to enter the results, I have to create the fixtures and enter the score for each team. Then, independently I have to record each goal scorer, creating a record for each goal which is either associated with an existing player or requires me to firstly create a new player and then create the goal.
When I ran the code below through heroku, I got this error:
syntax error, unexpected ':', expecting keyword_end
Maybe I'm missing something simple about looping through an array within a hash?
Thank you for any advice!
coalition = Team.find_by(name: "Coalition")
moscow_rebels = Team.find_by(name: "Moscow Rebels")
red_star = Team.find_by(name: "Red Star")
unsanctionables = Team.find_by(name: "The Unsanctionables")
cavalry = Team.find_by(name: "Cavalry")
galactics = Team.find_by(name: "The Galactics")
happy_sundays = Team.find_by(name: "Happy Sundays")
hardmen = Team.find_by(name: "Hardmen")
international = Team.find_by(name: "International")
evropa = Venue.find_by(name: "Evropa")
s28 = Season.find_by(number: 28)
start_time = DateTime.new(2020,9,6,11,0,0,'+03:00')
scheduled_matches_1 =
[
{team_1: cavalry, team_1_goals: 1, team_1_scorers: ["Minaev"], team_2_goals: 6, team_2_scorers: ["Kovalev", "Kovalev", "Kovalev", "Thomas", "Thomas", "Grivachev"], team_2: coalition, time: start_time, venue: evropa, season: s28},
{team_1: hardmen, team_1_goals: 4, team_1_scorers: ["Jones", "Jones", "Jones", "Fusi"], team_2_goals: 2, team_2_scorers: ["Kazamula", "Ario"], team_2: galactics, time: start_time + 1.hour, venue: evropa, season: s28},
{team_1: international, team_1_goals: 9, team_1_scorers: ["Kimonnen", "Kimonnen", "Kimonnen", "Burya", "Burya", "Zakharyaev", "Zakharyaev", "Lavruk", "Rihter"], team_2_goals: 0, team_2_scorers: [], team_2: happy_sundays, time: start_time+2.hours, venue: evropa, season: s28}
]
scheduled_matches.each do |match|
  new_fixture = Fixture.create(time: match[:time], venue: match[:venue], season: match[:season])
  tf1 = TeamFixture.create(team: match[:team_1], fixture: new_fixture)
  tf2 = TeamFixture.create(team: match[:team_2], fixture: new_fixture)
  ts1 = TeamScore.create(team_fixture: tf1, total_goals: match{:team_1_goals})
  ts2 = TeamScore.create(team_fixture: tf2, total_goals: match{:team_2_goals})
  match[:team_1_scorers].each do |scorer|
    if Player.exists?(team: tf1.team, last_name: scorer)
      Goal.create(team_score: ts1, player: Player.find_by(last_name: scorer))
    else
      new_player = Player.create(team: tf1.team, last_name: scorer)
      Goal.create(team_score: ts1, player: new_player)
    end
  end
  match[:team_2_scorers].each do |scorer_2|
    if Player.exists?(team: tf2.team, last_name: scorer_2)
      Goal.create(team_score: ts2, player: Player.find_by(last_name: scorer_2))
    else
      new_player = Player.create(team: tf2.team, last_name: scorer_2)
      Goal.create(team_score: ts2, player: new_player)
    end
  end
end
It looks like you are using braces when you meant to use brackets to access the hash. Below is one of the issues, but the same issue is in ts2.
ts1 = TeamScore.create(team_fixture: tf1, total_goals: match{:team_1_goals})
should be match[:team_1_goals]
ts1 = TeamScore.create(team_fixture: tf1, total_goals: match[:team_1_goals])
It may be because you have scheduled_matches_1 at the top and scheduled_matches.each do... further down.
But the real issue here is that your variable names match the data content, rather than being used to hold the content. If a new team joins your league, you have to change the code. Next week, you are going to have to change the hard-coded date value. Your scheduled_matches_1 data structure includes the ActiveRecord objects returned by the first set of Team.find_by(name: ...) calls. It would be easier to fetch these objects from the database inside your loops, and just hold the team name as a string in the hash.
There is some duplication too. Consider that each fixture has a home team and an away team. Each team has a name and an array (possibly empty) of the players who scored. We don't need the number of goals; we can just count the number of players in the 'scorers' array. The other attributes, like the location and season, belong to the fixture, not the team. So your hash might be better as
{
  "fixtures": [
    {
      "home": {
        "name": "Cavalry",
        "scorers": [
          "Minaev"
        ]
      },
      "away": {
        "name": "Coalition",
        "scorers": [
          "Kovalev",
          "Kovalev",
          "Kovalev",
          "Thomas",
          "Thomas",
          "Grivachev"
        ]
      },
      "venue": "Evropa",
      "season": "s28"
    }
  ]
}
because then you can create a reusable method to process each team. And maybe create a new method that returns the player (which it either finds or creates) which can be called by the loop that adds the goals.
Also, as it stands, I'm not sure the code can handle 'own goals', either. Perhaps something for a future iteration :)
I have what appears to be a strange bug in either the gocql driver for Cassandra, or in the Cassandra database itself.
I am trying to do a simple write and then a read-all request in two separate functions. I would expect to get all entries back on the read-all request, but I am only getting the last entry from Cassandra.
Here is how I am doing the write:
util.CassSession, _ = util.CassCluster.CreateSession()
defer util.CassSession.Close()
keySpaceMeta, _ := util.CassSession.KeyspaceMetadata("platypus")
valC, exists := keySpaceMeta.Tables["cassmessage"]
if exists == true {
    fmt.Println("cassmessage exists!!!")
} else {
    fmt.Println("cassmessage doesnt exist!")
}
if valC != nil {
    fmt.Println("return from valC cassmessage: ", valC)
}
insertString := `INSERT INTO cassmessage
(messagefrom, messageto, messagecontent)
VALUES('` + sendMsgReq.MessageFrom + `', '` +
    sendMsgReq.MessageTo + `', '` + sendMsgReq.MessageContent + `')`
fmt.Println("insertString value: ", insertString)
err := util.CassSession.Query(insertString).Exec()
if err != nil {
    fmt.Println("there was an error in appending data to cassmessage: ", err)
} else {
    fmt.Println("inserted data into cassmessage successfully")
}
the terminal output from the above:
app_1 | [17:59:43][WEBSERVER] : cassmessage exists!!!
app_1 | [17:59:43][WEBSERVER] : return from valC cassmessage:
&{platypus cassmessage [] []
[0xc000400140] [] map[messagefrom:0xc0004000a0
messageto:0xc000400140 messagecontent:0xc000400000]
[messagecontent messagefrom messageto]}
app_1 | [17:59:43][WEBSERVER] : inserted data into cassmessage successfully
I am not entirely sure what the valC output represents, although it appears to include some memory addresses, which seems like a good sign. I also see that I am not getting any error from the write Exec call, which is hopeful.
Here is how I am doing the read:
util.CassSession, _ = util.CassCluster.CreateSession()
defer util.CassSession.Close()
keySpaceMeta, _ := util.CassSession.KeyspaceMetadata("platypus")
valC, exists := keySpaceMeta.Tables["cassmessage"]
queryString := `SELECT messageto, messagecontent, messagefrom FROM cassmessage WHERE messagefrom='`+mailReq.Email+`'`
//returns nothing, should return many rows
queryString2 := `SELECT messageto, messagecontent, messagefrom FROM cassmessage`
//returns only last entry, should return many rows
queryString3 := `SELECT * FROM cassmessage WHERE messagefrom='`+mailReq.Email+`'`
//returns nothing, should return many rows
queryAllString := `SELECT * FROM cassmessage`
//returns only last entry, should return many rows
var messageto string
var messagecontent string
var messagefrom string
iter := util.CassSession.Query(queryAllString).Iter()
for iter.Scan(&messageto, &messagecontent, &messagefrom) {
    fmt.Println("Iter messageto: %v", messageto)
    fmt.Println("Iter messagecontent: %v", messagecontent)
    fmt.Println("Iter messagefrom: %v", messagefrom)
}
the terminal output from above:
app_1 | [18:09:54][WEBSERVER] : Iter messageto: %v xyz#xyz.com
app_1 | [18:09:54][WEBSERVER] : Iter messagecontent: %v a
app_1 | [18:09:54][WEBSERVER] : Iter messagefrom: %v abc#abc.com
This is not what I expect, as this is the output of the read after multiple writes to the database. If you look at the comments on the various queryString values I have tried, two of them return nothing when I expect all entries to be returned, and two of them return only the last written entry (as far as I can tell, they are all equivalent queries).
Does anyone know why I cannot return multiple entries using Iter or why my four different values on the different query strings I have tried are returning different results?
Thank you.
I maybe shouldn't, but I'm going to keep this here in case someone else runs into the same problem. I wasn't making sure that my primary key in my table was unique. Doing something like this:
util.CassSession.Query("CREATE TABLE cassmessage(" +
"messageto text, messagefrom text, messagecontent text, uniqueID text, PRIMARY KEY (uniqueID))").Exec()
Managed to fix the issue.
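For context: a Cassandra INSERT is effectively an upsert on the primary key, so rows that reuse the same key silently overwrite each other, which is why only the last write was visible. Below is a rough sketch of what the insert might look like once every row gets its own uniqueID; gocql.TimeUUID() and the parameter-bound query are my own additions, not code from the question.

// Sketch, assuming the table was recreated with uniqueID as its primary key.
id := gocql.TimeUUID() // time-based UUID, distinct for every insert
err := util.CassSession.Query(
    `INSERT INTO cassmessage (uniqueID, messagefrom, messageto, messagecontent)
     VALUES (?, ?, ?, ?)`,
    id, sendMsgReq.MessageFrom, sendMsgReq.MessageTo, sendMsgReq.MessageContent,
).Exec()
if err != nil {
    fmt.Println("there was an error in appending data to cassmessage: ", err)
}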
Thanks to everyone who took a look and helped. Cheers!
I am learning Go and MongoDB, currently using the alpha official MongoDB driver. Although it is in alpha, I think it is quite functional for basic usage.
But I ran into an interesting time-conversion issue with this driver.
Basically, I create a custom struct, marshal it to a BSON document, and then convert the BSON document back into the struct.
//check github.com/mongodb/mongo-go-driver/blob/master/bson/marshal_test.go
func TestUserStructToBsonAndBackwards(t *testing.T) {
    u := user{
        Username:          "test_bson_username",
        Password:          "1234",
        UserAccessibility: "normal",
        RegisterationTime: time.Now(), // .Format(time.RFC3339); adding Format results in a string
    }
    // Struct to BSON
    bsonByteArray, err := bson.Marshal(u)
    if err != nil {
        t.Error(err)
    }
    // .UnmarshalDocument is the same as ReadDocument
    bDoc, err := bson.UnmarshalDocument(bsonByteArray)
    if err != nil {
        t.Error(err)
    }
    unameFromBson, err := bDoc.LookupErr("username")
    // the binding works for the bson object too; the bound field is named username rather than Username
    if err != nil {
        t.Error(err)
    }
    if unameFromBson.StringValue() != "test_bson_username" {
        t.Error("bson from user struct Error")
    }
    // BSON doc back to user struct
    bsonByteArrayFromDoc, err := bDoc.MarshalBSON()
    if err != nil {
        t.Error(err)
    }
    var newU user
    err = bson.Unmarshal(bsonByteArrayFromDoc, &newU)
    if err != nil {
        t.Error(err)
    }
    if newU.Username != u.Username {
        t.Error("bson Doc to user struct Error")
    }
    // here we have an issue with the time format
    if newU != u {
        log.Println(newU)
        log.Println(u)
        t.Error("bson Doc to user struct time Error")
    }
}
However, since my struct has a time field, the resulting struct contains a less precise time value than the original, and the comparison fails.
=== RUN TestUserStructToBsonAndBackwards
{test_bson_username 1234 0001-01-01 00:00:00 +0000 UTC 2018-08-28 23:56:50.006 +0800 CST 0001-01-01 00:00:00 +0000 UTC normal }
{test_bson_username 1234 0001-01-01 00:00:00 +0000 UTC 2018-08-28 23:56:50.006395949 +0800 CST m=+0.111119920 0001-01-01 00:00:00 +0000 UTC normal }
--- FAIL: TestUserStructToBsonAndBackwards (0.00s)
model.user_test.go:67: bson Doc to user struct time Error
So I would like to ask a few questions about this:
How do I compare times properly in this case?
What's the best way to store time in the database to avoid such precision issues? I think the time in the database should not be a string.
Is this a db driver bug?
Times in BSON are represented as UTC milliseconds since the Unix epoch (spec). Time values in Go have nanosecond precision.
To round trip time.Time values through BSON marshalling, use times truncated to milliseconds since the Unix epoch:
func truncate(t time.Time) time.Time {
return time.Unix(0, t.UnixNano()/1e6*1e6)
}
...
u := user{
Username: "test_bson_username",
Password: "1234",
UserAccessibility: "normal",
RegisterationTime: truncate(time.Now()),
}
You can also use the Time.Truncate method:
u := user{
Username: "test_bson_username",
Password: "1234",
UserAccessibility: "normal",
RegisterationTime: time.Now().Truncate(time.Millisecond),
}
This approach relies on the fact that Unix epoch and Go zero time differ by a whole number of milliseconds.
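As a side note on the "how to compare" part of the question: comparing time.Time values with == (which is what the struct comparison newU != u does field by field) also compares the monotonic clock reading and the Location, so two identical instants can still compare unequal. time.Time.Equal compares only the instant. A small sketch under the same millisecond-truncation assumption:

// Sketch: compare the round-tripped time by instant, not by struct equality.
orig := time.Now().Truncate(time.Millisecond) // what was stored
got := orig.UTC()                             // e.g. what comes back from BSON, normalized to UTC

if !got.Equal(orig) { // Equal ignores Location and the monotonic reading
    t.Error("bson Doc to user struct time Error")
}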
You've correctly identified that the issue is one of precision.
MongoDB's Date type is "a 64-bit integer that represents the number of milliseconds...".
Golang's time.Time type "represents an instant in time with nanosecond precision".
As such, if you compare these respective values as golang types you will only get equivalence if the golang Time has millisecond resolution (e.g. zeroes for micro- and nanosecond places).
For example:
gotime := time.Now() // Nanosecond precision
jstime := gotime.Truncate(time.Millisecond) // Milliseconds
gotime == jstime // => likely false (different precision)
isoMillis := "2006-01-02T15:04:05.000Z07:00"
gomillis := gotime.Format(isoMillis)
jsmillis := jstime.Format(isoMillis)
gomillis == jsmillis // => true (same precision)
I am contemplating migrating from the Advantage native Delphi components to FireDAC. I have been searching for a way to determine, with FireDAC, which method was used to connect to the server: Remote, Local, or AIS (Internet).
I would be looking for the equivalent of TAdsConnection.ConnectionType.
Thanks
Gary Conley
The function you're looking for is called AdsGetConnectionType. Its import is declared in the FireDAC.Phys.ADSCli unit, but it's not used anywhere.
Still, it's not difficult to get its address and call it yourself. For example (not a polished one):
uses
  FireDAC.Stan.Consts, FireDAC.Phys.ADSCli, FireDAC.Phys.ADSWrapper;

var
  FTAdsGetConnectionType: TAdsGetConnectionType = nil;

type
  TADSLib = class(FireDAC.Phys.ADSWrapper.TADSLib)
  end;

function GetConnectionType(Connection: TFDConnection): Word;
const
  AdsGetConnectionTypeName = 'AdsGetConnectionType';
var
  CliLib: TADSLib;
  CliCon: TADSConnection;
  Status: UNSIGNED32;
  Output: UNSIGNED16;
begin
  Result := 0;
  CliCon := TADSConnection(Connection.CliObj);
  CliLib := TADSLib(CliCon.Lib);
  if not Assigned(FTAdsGetConnectionType) then
    FTAdsGetConnectionType := CliLib.GetProc(AdsGetConnectionTypeName);
  if Assigned(FTAdsGetConnectionType) then
  begin
    Status := FTAdsGetConnectionType(CliCon.Handle, @Output);
    if Status = AE_SUCCESS then
      Result := Word(Output)
    else
      FDException(CliLib.OwningObj, EADSNativeException.Create(Status, CliLib, nil),
        {$IFDEF FireDAC_Monitor}True{$ELSE}False{$ENDIF});
  end
  else
    FDException(CliLib.OwningObj, [S_FD_LPhys, CliLib.DriverID],
      er_FD_AccCantGetLibraryEntry, [AdsGetConnectionTypeName]);
end;
Possible usage:
case GetConnectionType(FDConnection1) of
  ADS_AIS_SERVER: ShowMessage('AIS server');
  ADS_LOCAL_SERVER: ShowMessage('Local server');
  ADS_REMOTE_SERVER: ShowMessage('Remote server');
end;
Iterating over datastore query results in GAE/Go is very slow.
q := datastore.NewQuery("MyStruct")
gaeLog.Infof(ctx, "run") // (1)
it := client.Run(ctx, q)
list := make([]MyStruct, 0, 10000)
gaeLog.Infof(ctx, "start mapping") // (2)
for {
    var m MyStruct
    _, err := it.Next(&m)
    if err == iterator.Done {
        break
    }
    if err != nil {
        gaeLog.Errorf(ctx, "datastore read error : %s ", err.Error())
        <some error handling>
        break
    }
    list = append(list, m)
}
gaeLog.Infof(ctx, "end mapping. count : %d", len(list)) // (3)
The result is below.
18:02:11.283 run // (1)
18:02:11.291 start mapping // (2)
18:02:15.741 end mapping. count : 2400 // (3)
It takes about 4.5 seconds between (2) and (3) for only 2400 records. That is very slow.
How can I improve performance?
[Update]
I added the query to the code above: q := datastore.NewQuery("MyStruct"). I am trying to retrieve all the entities of the kind MyStruct; this kind has 2400 entities.
I was using cloud.google.com/go/datastore and found that it was slow, so I migrated to google.golang.org/appengine/datastore.
The result is as follows: less than 1 second.
13:57:46.216 run
13:57:46.367 start mapping
13:57:47.063 end mapping. count : 2400
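For reference, here is a minimal sketch of the equivalent fetch with google.golang.org/appengine/datastore, using GetAll to load every entity in a single call instead of iterating. The handler shape, the loadAll function name and the gaeLog alias are assumptions for illustration, not code from the original post.

import (
    "net/http"

    "google.golang.org/appengine"
    "google.golang.org/appengine/datastore"
    gaeLog "google.golang.org/appengine/log"
)

func loadAll(r *http.Request) ([]MyStruct, error) {
    ctx := appengine.NewContext(r)

    var list []MyStruct
    // GetAll runs the query and appends every matching entity to list in one call.
    if _, err := datastore.NewQuery("MyStruct").GetAll(ctx, &list); err != nil {
        gaeLog.Errorf(ctx, "datastore read error: %s", err.Error())
        return nil, err
    }
    gaeLog.Infof(ctx, "end mapping. count: %d", len(list))
    return list, nil
}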