Read a column from Amplify DataStore using React

I am writing a React app using the AWS Amplify DataStore library, and I want to read a whole column and put it in a drop-down select menu. I have finished designing the UI, but I don't know how to get only one column rather than the whole table.
Currently my query statement looks something like this:
await DataStore.query(myTable);
This returns the whole table. I want to know if I can get just myTable.id, where 'id' is the column name.

Because Amplify DataStore already maintains a local replica of your data, the recommendation from Samuel above is reasonable. Writing something like const ids = (await DataStore.query(MyTable)).map(record => record.id) to get the id field for every record in your table should work, and unless your data is really massive (in which case I'd imagine you're not trying to fill a dropdown with it) this should be reasonably fast.
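For illustration, here is a minimal sketch of that pattern, assuming a generated model named MyTable exported from ./models (both names come from the question and are assumptions about your schema):
import { DataStore } from '@aws-amplify/datastore';
import { MyTable } from './models'; // hypothetical generated model

async function loadDropdownIds(): Promise<string[]> {
  // DataStore.query resolves to an array of model instances from the local store.
  const records = await DataStore.query(MyTable);
  // Project out only the id field for the dropdown options.
  return records.map(record => record.id);
}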
The other alternative would be to query using the API category (https://docs.amplify.aws/lib/graphqlapi/query-data/q/platform/js/) with a custom GraphQL document that includes only id in the selection set, but that's not really recommended, since you would be hitting your backend API unnecessarily rather than relying on the local data which DataStore maintains.
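If you did go that route, a rough sketch might look like the following, assuming a model named MyTable whose generated list query is listMyTables (both names are assumptions):
import { API, graphqlOperation } from 'aws-amplify';

// Custom GraphQL document that asks for only the id field.
const listIdsOnly = /* GraphQL */ `
  query ListMyTables {
    listMyTables {
      items {
        id
      }
    }
  }
`;

async function fetchIdsViaApi(): Promise<string[]> {
  const result: any = await API.graphql(graphqlOperation(listIdsOnly));
  return result.data.listMyTables.items.map((item: { id: string }) => item.id);
}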

Related

How do I only return the data I need using GraphQL?

I am excited to start using GraphQL instead of REST but I still can't figure out how I would only return the data I need from a database without returning all the data every time.
Example:
I query the database for a user object that has 10 fields, and I use GraphQL to return the entire object. Not a problem! But then I want to query that user object again using GraphQL and only return one field. I know GraphQL can filter the data on its way back to the client, but I would still need to query the database for the entire object.
Is there any way to make it only return back one field without having to return back the entire object?
This also depends on the database you are using. If you are using MongoDB, it wouldn't benefit you to use an aggregation for this purpose, since aggregations are heavy on the db; it is almost certainly better to retrieve the full object from the db and then let GraphQL filter what the client needs.
Also remember that GraphQL is intended to send the client only what it asked for, but it doesn't manage how you retrieve the information (it could come from a database, memory, static content, etc.).
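That said, the resolver is free to fetch only what was requested. A minimal sketch, assuming a Node GraphQL server with the official mongodb driver (the users collection and the resolver shape are hypothetical), could derive a projection from the resolver's info argument:
import { GraphQLResolveInfo, Kind } from 'graphql';
import { Collection, ObjectId } from 'mongodb';

// Build a MongoDB projection from the fields the client actually selected.
function projectionFromInfo(info: GraphQLResolveInfo): Record<string, 1> {
  const selections = info.fieldNodes[0].selectionSet?.selections ?? [];
  const projection: Record<string, 1> = {};
  for (const sel of selections) {
    if (sel.kind === Kind.FIELD) projection[sel.name.value] = 1;
  }
  return projection;
}

// Hypothetical resolver: only the requested fields are read from the database.
const resolvers = {
  Query: {
    user: (
      _parent: unknown,
      args: { id: string },
      ctx: { users: Collection },
      info: GraphQLResolveInfo
    ) =>
      ctx.users.findOne(
        { _id: new ObjectId(args.id) },
        { projection: projectionFromInfo(info) }
      ),
  },
};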

Querying Salesforce Object Column Names w/SOQL

I am using the Salesforce SOQL snap in a SnapLogic integration between our Salesforce instance and an S3 bucket.
I am trying to use a SOQL query in the Salesforce SOQL snap field "SOQL query*" to return the column names of an object. For example, I want to run a SOQL query to return the column names of the "Account" object.
I am doing this because SOQL does not allow "Select *". I have seen code solutions in Apex for this, but I am looking for a way to do it using only a SOQL query.
You want to query metadata? Names of available tables, names of columns you can see in each table, maybe types instead of real Account/Contact/... data, correct?
You might have to bump the API version up a bit; the current one is 47/48, so some objects might not be visible in the version you use. Also, what API options do you have? SOAP, REST? Is the "Tooling API" an option? It has a very nice official FieldDefinition table for pulling exactly this.
It's not perfect but this could get you started:
SELECT EntityDefinition.QualifiedApiName, QualifiedApiName, DataType
FROM FieldDefinition
WHERE EntityDefinition.QualifiedApiName IN ('Account', 'Contact', 'myNamespace__myCustomObject__c')
I don't see the table in the REST API reference but it seems to query OK in Workbench so there's hope.
Generally try to Google around about EntityDefinition, FieldDefinition, EntityParticle... For example this is a decent shot at learning which tables are visible to you:
SELECT KeyPrefix, QualifiedApiName, Label, IsQueryable, IsDeprecatedAndHidden, IsCustomSetting
FROM EntityDefinition
WHERE IsCustomizable = true AND IsCustomSetting = false
Or in a pinch you could try to see which fields your user has permission to query. It's a bit of a roundabout way to do it, but I have no idea which tables your connector can "see".
Starting from API version 51.0 there's the FIELDS() function available: it lets you query all fields of a given object similar to "SELECT *"
Example:
SELECT FIELDS(ALL) FROM User LIMIT 200
Reference:
https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql_select_fields.htm

How to work with unsaved entities even though ID attribute is needed?

I'm creating a React application where my data has the following structure:
interface BookCase {
  id: number;
  bookShelves: BookShelf[];
}

interface BookShelf {
  id: number;
}
Every bookcase and every bookshelf has an id property. I use this for the key attribute and for locating a bookshelf inside the bookShelves array. The id is generated in the backend by the database (With a BigSerial in PostgreSQL) on save.
I now want to create a new bookcase inside my frontend without immediately saving it to the backend. First I want to work with it, perform some operations on it (e.g. place a book on the shelf), and afterwards send the whole batch of changes with the new entities to the backend where it will then be persisted in the database.
Of course I do not yet have an id, although I need one to work with the bookcases. Should I rewrite my application to also accept null for id (I would prefer not to)? Should I just create a random temporary id, possibly colliding with ids already present in the database (or, for example, use a negative value like -1)? Then I would need to replace all the ids after everything has been saved to the database.
With UUIDs I could generate it on the frontend, but I guess there also has to be a common pattern to work with just incrementing integers as the id.
I do not think there is a clear answer here.
Essentially you have an object-relational mapping, and there are various ways to handle it. Entity Framework, for example, just uses the default value for the data type: if the entity does not exist yet, the ID will be 0, and any persisted entities have values starting at 1, so there are no conflicts.
One way I usually handle saving is by returning the persisted record from the request, so you just replace your old one with that and the correct ID value is applied automatically.
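A minimal sketch of that pattern, using the BookCase/BookShelf interfaces from the question and a hypothetical /api/bookcases endpoint that returns the persisted entity (both the endpoint and the negative-id convention are assumptions):
// Negative ids mark unsaved entities; positive ids come from the database.
let nextTempId = -1;

function newBookShelf(): BookShelf {
  return { id: nextTempId-- };
}

// Hypothetical save call: POST the draft and get back the persisted version
// with real database ids, then replace the local draft with it.
async function saveBookCase(draft: BookCase): Promise<BookCase> {
  const response = await fetch('/api/bookcases', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(draft),
  });
  return response.json();
}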

Is the database used by django subqueries sticky?

I am having a problem with Django subqueries. When I fetch the original QuerySet, I specify the database that I need to use. My hunch is that the later subquery ends up using the 'default' database instead of the one the parent query used.
My models approximately look like so (I have several):-
class Author(models.Model):
    author_name = models.CharField(max_length=255)
    author_address = models.CharField(max_length=255)

class Book(models.Model):
    book_name = models.CharField(max_length=255)
    author = models.ForeignKey(Author, null=True)
Now I fetch a QuerySet representing all books that are called Mark like so:-
b_det = Book.objects.using('some_db').filter(book_name = 'Mark')
Then later somewhere in the code I trigger a subquery by doing something like:-
if b_det:
    auth_address = b_det[0].author.author_address
My problem is that, arbitrarily in some cases on my live server, the subquery fails even though there is valid data for that author's id. My suspicion is that the subquery is not using the same database, 'some_db'. Is this possible? Is the database that needs to be used not sticky in subqueries? It is just a hunch that this might be the problem. It is happening in the context of a Celery worker; is it possible that the combination of Celery with the Django ORM has some bug?
I have solved this each time this occurred by doing a full fetch by invoking select_related like so.
b_det = Book.objects.using('some_db').select_related('author').filter(book_name = 'Mark')
So right now, the only way for me to solve the problem is to determine beforehand all the data that I will need and make sure that the top-level fetch pulls in all those inner model references using select_related. Any ideas why something like this would fail?
I am unable to recreate this locally, otherwise I would have debugged it. Like I said, it is pretty random.
OK, I have a handle on this now. My assumption that subqueries would remain sticky to the original database was wrong. What Django does is first hit the configured database router; only if that does not return anything does it fall back to the original database.
So, if the configured database router returns some database, that gets used. In my opinion this is wrong, and Django should use the original database first and only then check the database router.

What is the best method to exclude data and query parts of data in a Swift Firebase query?

I am querying users from Firebase and would like to know the best method to query all users excluding the current ref.authData.uid. In Parse it reads like this:
query?.whereKey("username", notEqualTo: PFUser.currentUser()!.username!)
query?.whereKey("username", containsString: self.searchTXTFLD.text)
Also, is there any Firebase query type similar to Parse's containsString?
There is no way to retrieve only the items from Firebase that don't match a certain condition. You'll have to retrieve all items and exclude the offending ones client-side. Also see: Is it possible to query data that is not equal to the specified condition?
There is also no contains operator for Firebase queries. This has been covered many times before, such as in this question: Firebase query - Find item with child that contains string
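To illustrate the client-side approach, here is a sketch using the Firebase Web SDK (the Swift calls are analogous); the 'users' path and 'username' field are assumptions about your data layout, and the startAt/endAt pair only gives prefix matching, not a true contains:
import { getDatabase, ref, get, query, orderByChild, startAt, endAt } from 'firebase/database';
import { getAuth } from 'firebase/auth';

async function searchUsers(searchText: string) {
  const db = getDatabase();
  const currentUid = getAuth().currentUser?.uid;

  // Prefix search: order by username and bound the range with the typed text.
  const q = query(
    ref(db, 'users'),
    orderByChild('username'),
    startAt(searchText),
    endAt(searchText + '\uf8ff')
  );

  const snapshot = await get(q);
  const results: Array<{ uid: string; username: string }> = [];
  snapshot.forEach(child => {
    // Exclude the current user client-side, since Firebase has no "not equal" filter.
    if (child.key !== currentUid) {
      results.push({ uid: child.key as string, username: child.val().username });
    }
  });
  return results;
}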
