I am using React Native and Firebase Firestore. Below is my code where the error persists. It should log that the document exists, since the document does exist in Firestore, but it keeps returning false. When I tested with a document that I manually created on the Firestore website, the .exists property was true. When I create a document with .set(), however, it returns false. Does anyone have an explanation or a solution for why this could be happening? I have referred to other StackOverflow questions, but they were not helpful, as this does not seem to be happening to anyone else. Let me know if more info is needed. Thanks in advance.
export default class GuestSession extends Component {
  state = {
    isLoading: true,
    users: [],
    code: 0
  }

  constructor(props) {
    super(props);
    let displayName = firebase.auth().currentUser.displayName
    this.state.code = props.route.params.code
    // note: doc() expects a string, so code must be a string here
    const docRef = firebase.firestore().collection('sessions').doc(this.state.code)
    docRef.get().then((docSnapshot) => {
      if (docSnapshot.exists) {
        console.log("exists")
      } else {
        console.log("doesn't exist")
      }
    })
    this.state.isLoading = false
  }
}
I have figured out my issue. When dynamically creating documents on the fly, you want to make sure that you create the document itself first, instead of only writing to a document inside one of its subcollections. For example,
// dynamic document creation that only writes the nested user document
firebase.firestore().collection('sessions').doc(this.state.code)
  .collection('users').doc('exampleUserDocument').set({ username: username })
should instead be written as
const sessionRef = firebase.firestore().collection('sessions')
// properly sets the session document
sessionRef.doc(this.state.code).set({ exampleField: value })
// properly sets the user document
sessionRef.doc(this.state.code).collection('users').doc('exampleUserDocument').set({ username: username })
This way the session document is actually created with its own fields, and the user document is created inside its subcollection. The explanation for the original behavior: Firestore does not implicitly create the ancestor documents of a path. Writing only to sessions/{code}/users/{userDoc} creates the user document, but sessions/{code} itself is never created; it shows up italicized in the Firestore console as a "missing" placeholder, and get() on it returns a snapshot with exists === false. You have to call set() on the session document itself for it to exist. I hope this helps others in the future.
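A minimal sketch that reproduces both cases (the session code 'ABC123' and the field names are placeholders; assumes an async context and the same namespaced firebase SDK as the question):
const db = firebase.firestore();

// writing only to the nested user document does NOT create the parent
await db.collection('sessions').doc('ABC123')
  .collection('users').doc('user1').set({ username: 'alice' });

let snap = await db.collection('sessions').doc('ABC123').get();
console.log(snap.exists); // false - 'ABC123' was never created itself

// explicitly set the session document, and now it exists
await db.collection('sessions').doc('ABC123').set({ createdAt: Date.now() });
snap = await db.collection('sessions').doc('ABC123').get();
console.log(snap.exists); // true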
I am working on a React frontend with a delete method which deletes an item in the database.
Having a hard time with the following code.
deleteFromDB = idToDelete => {
  let objIdToDelete = null;
  // find the MongoDB _id that belongs to the user-facing id
  this.state.data.forEach(dat => {
    if (dat.id === idToDelete) {
      objIdToDelete = dat._id;
    }
  });
  // ... delete request and state update follow here
};
This method is called after entering the ID to be deleted; it looks up the matching record and should then modify the state by deleting the item corresponding to that ID.
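For completeness, a minimal sketch of how the rest of the method might look, assuming axios and an Express-style endpoint at /api/deleteData (both the endpoint name and axios are assumptions, not from the original code):
deleteFromDB = idToDelete => {
  const item = this.state.data.find(dat => dat.id === idToDelete);
  if (!item) return; // nothing to delete

  // hypothetical endpoint; adjust to your backend's route
  axios.delete('/api/deleteData', { data: { id: item._id } })
    .then(() => {
      // drop the deleted item from local state
      this.setState(prev => ({
        data: prev.data.filter(dat => dat._id !== item._id)
      }));
    });
};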
It's a naming convention some developers use for variables and methods to indicate that they are meant to be private.
Also see:
What is the underscore "_" in JavaScript?
It may also be that _id is the primary key of the data you want to delete; _id is the default primary-key field that MongoDB adds to every document.
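For illustration, a record fetched from MongoDB typically carries both fields; one element of this.state.data might look like this (the id field is the app-level id from the question, and the values are made up):
// shape of one element of this.state.data (values are illustrative)
{
  _id: "507f1f77bcf86cd799439011", // MongoDB's primary key (an ObjectId string)
  id: 42,                          // the app-level id the user types in
  text: "buy milk"
}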
I need unique records in my Parse class, but due to saveInBackground a simple find() on the client didn't do the job. Even adding a boolean like bSavingInBackground and skipping an additional save if true wouldn't prevent my app from creating duplicates.
Ensuring unique keys is obviously very helpful in many (multi-user) situations.
Parse Cloud Code seemed to be the right way, but I didn't find a proper solution.
After some trial and error testing I finally got it to work using Cloud Code. Hope it helps someone else.
My 'table' is myObjectClass and the field that needs to be unique is 'myKey'.
Add this to main.js and upload it to your Parse Server Cloud Code.
Change myObjectClass and 'myKey' to suit your needs:
Parse.Cloud.beforeSave("myObjectClass", function(request, response) {
  var myObject = request.object;
  if (myObject.isNew()) { // only check new records, otherwise it's an update
    var query = new Parse.Query("myObjectClass");
    query.equalTo("myKey", myObject.get("myKey")); // field name must match exactly
    query.count({
      success: function(number) {
        if (number > 0) { // a record with this key already exists, don't save
          response.error("Record already exists");
        } else {
          response.success();
        }
      },
      error: function(error) { // query failed, allow the save
        response.success();
      }
    })
  } else {
    response.success();
  }
});
Your approach is the correct one, but from a performance point of view, query.first() should be faster than query.count(): query.first() stops as soon as it finds a matching record, whereas query.count() has to count every matching record, which can be costly if you have a huge class.
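A minimal sketch of the same beforeSave hook using query.first() instead (same assumed class and field names as above, and the same legacy response-object style of older Parse Server versions):
Parse.Cloud.beforeSave("myObjectClass", function(request, response) {
  var myObject = request.object;
  if (!myObject.isNew()) {
    return response.success(); // updates are allowed through
  }
  var query = new Parse.Query("myObjectClass");
  query.equalTo("myKey", myObject.get("myKey"));
  query.first({
    success: function(existing) {
      if (existing) {
        response.error("Record already exists"); // stop duplicate saves
      } else {
        response.success();
      }
    },
    error: function() {
      response.success(); // query failed, allow the save
    }
  });
});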
In an architecture where objects have many complex relationships, what are some maintainable approaches to dealing with
Resolving Dependencies
Optimistic Updates
in React applications?
For example, given this type of schema:
```
type Foo {
...
otherFooID: String,
bars: List<Bar>
}
type Bar {
...
bizID: String,
}
type Biz {
...
}
```
A user might want to save the following:
firstBiz = Biz();
secondBiz = Biz();
firstFoo = Foo({bars: [Bar({biz: firstBiz})]});
secondFoo = Foo({bars: [Bar({biz: secondBiz})], otherFooID: firstFoo.id});
First Problem: Choosing real ids
The first problem with the above is having the correct id, i.e. in order for secondFoo to save, it needs to know the actual id of firstFoo.
To solve this, we could make the tradeoff of letting the client choose the id, using something like a UUID. I don't see anything terribly wrong with this, so we can say this can work.
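For instance, a minimal sketch of letting the client pick ids up front with the uuid package (Foo is the same placeholder constructor as above):
const { v4: uuidv4 } = require('uuid');

// ids are decided on the client, before anything is saved,
// so secondFoo can reference firstFoo immediately
const firstFooId = uuidv4();
const firstFoo = Foo({ id: firstFooId, bars: [] });
const secondFoo = Foo({ id: uuidv4(), bars: [], otherFooID: firstFooId });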
Second Problem: Saving in order
Even if we determine ids on the frontend, the server still needs to receive these save requests in order.
```
- save firstFoo
// okay. now firstFoo.id is valid
- save secondFoo
// okay, it was able to resolve otherFooID to firstFoo
```
The reasoning here is that the backend must guarantee that any id that is being referenced is valid.
```
- save secondFoo
// backend throws an error otherFooId is invalid
- save firstFoo
// okay
```
I am unsure of the best way to attack this problem.
The current approaches that come to mind:
Have custom actions that do the coordination via promises
save(biz).then(_ => save(bar)).then(_ => save(firstFoo)).then(_ => save(secondFoo))
The downside here is that it is quite complex, and the number of these kinds of combinations will continue to grow
Create a pending / resolve helper
```
const pending = {}
const resolve = (obj, refFn) => {
  // wait until everything this object references has been saved
  return Promise.all(refFn(obj).map(ref => pending[ref]));
}
const fooRefs = (foo) => {
  return foo.bars.map(bar => bar.id).concat(foo.otherFooID);
}
pending[firstFoo.id] = resolve(firstFoo, fooRefs).then(_ => save(firstFoo))
```
The problem with 2 is that it can easily cause errors if we forget to resolve or to add to pending.
Potential Solutions
It seems like Relay or Om Next can solve these issues, but I would like something less heavyweight. Perhaps something that can work with redux, or maybe it's some concept I am missing.
Thoughts much appreciated
I have a JS/PHP implementation of such a system
My approach is to serialize records both on the client and server using a reference system
For example, an unsaved Foo has GUID eeffa3, and a second Foo references its id key as {otherFooId: '#Foo#eeffa3[id]'}
Similarly, you can reference a whole object like this:
Foo#eeffa3: {bars: ['#Bar#ffg4', '#Bar#ffg5']}
Now the client-side serializer would build a tree of relations and model attributes like this
{
  modelsToSave: {
    'Foo#eeffa3': {
      attribs: {name: 'John', title: 'Mr.'},
      relations: {bars: ['#Bar#ffg4']}
    },
    'Bar#ffg4': {
      attribs: {id: 5},
      relations: {parentFoo: '#Foo#eeffa3'}
    }
  }
}
As you can see in this example I have described circular relations between unsaved objects in pure JSON.
The key here is to hold these "record" objects in client-side memory and never mutate their GUID
The server can figure out the order of saving by saving first records without "parent" dependencies, then records which depend on those parents
After saving, the server will return the same reference map, but now the attribs will also include primary keys and foreign keys.
JS walks the received map twice (the first pass just updates server-received attributes; the second pass substitutes record and attribute references with the real records and attributes).
So there are 2 mechanisms for referencing a record, a client-side GUID and a server-side PK
When receiving a server JSON, you match your GUID with the server primary key
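A minimal sketch of that two-pass walk, assuming the map shape shown above (applyServerMap and localRecords are illustrative names, not part of the original system):
function applyServerMap(serverMap, localRecords) {
  const entries = Object.entries(serverMap.modelsToSave);

  // pass 1: copy server-assigned attributes (primary keys, foreign keys) onto local records
  for (const [guid, entry] of entries) {
    Object.assign(localRecords[guid].attribs, entry.attribs);
  }

  // pass 2: substitute '#Class#guid' references with the real local records
  const deref = ref => localRecords[String(ref).replace(/^#/, '')];
  for (const [guid, entry] of entries) {
    for (const [name, ref] of Object.entries(entry.relations || {})) {
      localRecords[guid].relations[name] = Array.isArray(ref) ? ref.map(deref) : deref(ref);
    }
  }
}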
In my DB schema, I need an auto-increment primary key. How can I realize this feature?
PS: To access DynamoDB, I use dynode, a module for Node.js.
Disclaimer: I am the maintainer of the Dynamodb-mapper project
Intuitive workflow of an auto-increment key:
get the last counter position
add 1
use the new number as the index of the object
save the new counter value
save the object
This is just to explain the underlying idea. Never do it this way, because it is not atomic: under certain workloads you may allocate the same ID to two or more different objects, which would result in data loss.
The solution is to use the atomic ADD operation along with ALL_NEW of UpdateItem:
atomically generate an ID
use the new number as the index of the object
save the object
In the worst-case scenario, the application crashes before the object is saved, but you never risk allocating the same ID twice.
There is one remaining problem: where to store the last ID value? We chose:
{
    "hash_key": -1,  # 0 was judged too risky, as it is the default value for integers
    "__max_hash_key__": N
}
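In Node, the atomic allocation step might look like this with the AWS SDK's DocumentClient (the table name is an assumption; dynode exposes a similar update call):
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

// atomically increment the counter and read the new value in one call
docClient.update({
  TableName: 'myTable',                       // assumed table name
  Key: { hash_key: -1 },                      // the special counter item
  UpdateExpression: 'ADD #max :one',          // atomic ADD
  ExpressionAttributeNames: { '#max': '__max_hash_key__' },
  ExpressionAttributeValues: { ':one': 1 },
  ReturnValues: 'ALL_NEW'                     // get the freshly incremented value back
}, (err, data) => {
  if (err) throw err;
  const newId = data.Attributes.__max_hash_key__; // safe to use as the object's id
  console.log('allocated id', newId);
});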
Of course, to work reliably, all applications inserting data MUST be aware of this system otherwise you might (again) overwrite data.
The last step is to automate the process. For example:
When hash_key is 0:
atomically_allocate_ID()
actual_save()
For implementation details (Python, sorry), see https://bitbucket.org/Ludia/dynamodb-mapper/src/8173d0e8b55d/dynamodb_mapper/model.py#cl-67
To tell you the truth, my company does not use it in production, because most of the time it is better to find a natural key: for a user, an ID; for a transaction, a datetime, ...
I wrote some examples in dynamodb-mapper's documentation, and they can easily be extrapolated to Node.js.
If you have any question, feel free to ask.
Another approach is to use a UUID generator for primary keys, as these are highly unlikely to clash.
IMO you are more likely to experience errors consolidating primary key counters across highly available DynamoDB tables than from clashes in generated UUIDs.
For example, in Node:
npm install uuid
var uuid = require('uuid');
// Generate a v1 (time-based) id
uuid.v1(); // -> '6c84fb90-12c4-11e1-840d-7b25c5ee775a'
// Generate a v4 (random) id
uuid.v4(); // -> '110ec58a-a0f2-4ac4-8393-c866d813b8d1'
Taken from SO answer.
If you're okay with gaps in your incrementing id, and you're okay with it only roughly corresponding to the order in which the rows were added, you can roll your own: Create a separate table called NextIdTable, with one primary key (numeric), call it Counter.
Each time you want to generate a new id, you would do the following:
Do a GetItem on NextIdTable to read the current value of Counter --> curValue
Do a PutItem on NextIdTable to set the value of Counter to curValue + 1. Make this a conditional PutItem so that it will fail if the value of Counter has changed.
If that conditional PutItem failed, it means someone else was doing this at the same time as you were. Start over.
If it succeeded, then curValue is your new unique ID.
Of course, if your process crashes before actually applying that ID anywhere, you'll "leak" it and have a gap in your sequence of IDs. And if you're doing this concurrently with some other process, one of you will get value 39 and one of you will get value 40, and there are no guarantees about which order they will actually be applied in your data table; the guy who got 40 might write it before the guy who got 39. But it does give you a rough ordering.
Parameters for a conditional PutItem in Node.js are detailed here: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/frames.html#!AWS/DynamoDB.html. If you had previously read a value of 38 from Counter, your conditional PutItem request might look like this.
var conditionalPutParams = {
    TableName: 'NextIdTable',
    Item: {
        Counter: {
            N: '39'
        }
    },
    Expected: {
        Counter: {
            AttributeValueList: [
                {
                    N: '38'
                }
            ],
            ComparisonOperator: 'EQ'
        }
    }
};
For those coding in Java, DynamoDBMapper can now generate unique UUIDs on your behalf.
DynamoDBAutoGeneratedKey
Marks a partition key or sort key property as being auto-generated.
DynamoDBMapper will generate a random UUID when saving these
attributes. Only String properties can be marked as auto-generated
keys.
Use the DynamoDBAutoGeneratedKey annotation like this
@DynamoDBTable(tableName="AutoGeneratedKeysExample")
public class AutoGeneratedKeys {
    private String id;

    @DynamoDBHashKey(attributeName = "Id")
    @DynamoDBAutoGeneratedKey
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
}
As you can see in the example above, you can apply both the DynamoDBAutoGeneratedKey and DynamoDBHashKey annotation to the same attribute to generate a unique hash key.
Addition to @yadutaf's answer
AWS supports Atomic Counters.
Create a separate table (order_id) with a row holding the latest order_number:
+----+--------------+
| id | order_number |
+----+--------------+
| 0 | 5000 |
+----+--------------+
This allows you to increment order_number by 1 and get the incremented result back in a callback from AWS DynamoDB:
const AWS = require('aws-sdk');

const config = {
  region: 'us-east-1',
  endpoint: "http://localhost:8000"
};
const docClient = new AWS.DynamoDB.DocumentClient(config);

let params = {
  TableName: 'order_id',
  Key: {
    "id": 0
  },
  UpdateExpression: "set order_number = order_number + :val",
  ExpressionAttributeValues: {
    ":val": 1
  },
  ReturnValues: "UPDATED_NEW"
};

docClient.update(params, function(err, data) {
  if (err) {
    console.log("Unable to update the table. Error JSON:", JSON.stringify(err, null, 2));
  } else {
    console.log(data);
    console.log(data.Attributes.order_number); // <= here is our incremented result
  }
});
🛈 Be aware that in some rare cases there might be problems with the connection between your caller and the AWS API. That can leave the DynamoDB row incremented while you get back a connection error, so some incremented values may go unused.
You can then use the incremented data.Attributes.order_number in your table, e.g. to insert {id: data.Attributes.order_number, otherfields: {}} into an order table.
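For example, a minimal follow-up inside the success branch above (the order table name and its fields are assumptions):
// inside the success branch of the update callback above
const orderParams = {
  TableName: 'order',                   // assumed table name
  Item: {
    id: data.Attributes.order_number,   // the freshly incremented id
    createdAt: new Date().toISOString() // illustrative extra field
  }
};
docClient.put(orderParams, function(err) {
  if (err) console.log("Unable to insert order:", err);
});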
I don't believe it is possible to do a SQL-style auto-increment, because the tables are partitioned across multiple machines. I generate my own UUID in PHP, which does the job; I'm sure you could come up with something similar in JavaScript.
I've had the same problem and created a small web service just for this purpose. See this blog post, that explains how I'm using stateful.co with DynamoDB in order to simulate auto-increment functionality: http://www.yegor256.com/2014/05/18/cloud-autoincrement-counters.html
Basically, you register an atomic counter at stateful.co and increment it every time you need a new value, through RESTful API. The service is free.
Auto-increment is not good from a performance perspective, as it will overload specific partitions while keeping others idle; it doesn't give you an even distribution if you're storing data in DynamoDB.
awsRequestId looks like it's actually a v4 (random) UUID; here's a code snippet to try it:
exports.handler = function(event, context, callback) {
  console.log('remaining time =', context.getRemainingTimeInMillis());
  console.log('functionName =', context.functionName);
  console.log('AWSrequestID =', context.awsRequestId);
  callback(null, context.functionName);
};
In case you want to generate this yourself, you can use https://www.npmjs.com/package/uuid or Ulide to generate different versions of UUID based on RFC-4122
V1 (timestamp based)
V3 (Namespace)
V4 (Random)
For Go developers, you can use these packages from Google's UUID, Pborman, or Satori. Pborman is better in performance, check these articles and benchmarks for more details.
More Info on Universal Unique Identifier Specification could be found here.
Create a new file.js and put this code in it:
exports.guid = function () {
  // 8 random hex chars, optionally wrapped as "-xxxx-xxxx"
  function _p8(s) {
    var p = (Math.random().toString(16) + "000000000").substr(2, 8);
    return s ? "-" + p.substr(0, 4) + "-" + p.substr(4, 4) : p;
  }
  return (_p8() + _p8(true) + _p8(true) + new Date().toISOString().slice(0, 10)).replace(/-/g, "");
}
Then you can apply this function to the primary key id. It will generate a UUID-like random id.
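Usage might look like this (assuming the module above is saved as file.js):
const { guid } = require('./file'); // the module created above

const item = {
  id: guid(), // 32 chars: 24 random hex chars followed by today's date digits
  name: 'example'
};
console.log(item.id);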
In case you are using DynamoDB with the Dynamoose ORM, you can easily set a default unique id. Here is a simple user-creation example:
// User.model.js
const dynamoose = require("dynamoose");
const userSchema = new dynamoose.Schema(
{
id: {
type: String,
hashKey: true,
},
displayName: String,
firstName: String,
lastName: String,
},
{ timestamps: true },
);
const User = dynamoose.model("User", userSchema);
module.exports = User;
// User.controller.js
const { v4: uuidv4 } = require("uuid");
const User = require("./user.model");

// `to`, `badRes` and `goodRes` are the author's own helpers
// (await-to-js style error wrapping and response formatting)
exports.create = async (req, res) => {
  const user = new User({ id: uuidv4(), ...req.body }); // set unique id
  const [err, response] = await to(user.save());
  if (err) {
    return badRes(res, err);
  }
  return goodRes(res, response);
};
Instead of using a UUID, use a KSUID for ids. KSUIDs are naturally ordered by generation time.
https://www.npmjs.com/package/ksuid?activeTab=readme
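A quick sketch with that package, using the API from its README:
const KSUID = require('ksuid');

// synchronously generate an id; ids sort by creation time
const id = KSUID.randomSync().string;
console.log(id); // e.g. '0ujtsYcgvSTl8PAuAdqWYSMnLOv'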