I am trying to build an open-source Python application hosted on Google App Engine (GAE) to sync contacts by group with a limited number of users. In a web interface, users will be able to pick their group and whom it will be synced with.
I understand there are a lot of applications on the marketplace with the same functionality, but my organization is concerned about those providers selling contacts to third parties. We are a non-profit organization, so the code could be hosted on Google Code or GitHub for community contribution.
(sorry for the long intro)
What is the best way to start? Is there a tutorial with similar functionality available that I can expand on?
What is the best way to compare two Contact kind elements, to see whether they need to be synced?
Is there a last-updated field on the Contact kind elements, in case I want to implement a last-update-wins strategy?
thanks!
I don't know of any tutorials for syncing and comparing contacts specifically, but there is a getting started guide for the Google Contacts API at https://developers.google.com/google-apps/contacts/v3/.
The contacts are sent as XML blobs, so you could compare two contacts by parsing them and looking at the individual elements within them. I don't think there's a better way to do this, but there are libraries to handle the parsing for you.
There is a last-updated field sent as part of each contact when retrieving them with the API. It is an XML element labeled <updated>.
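For illustration, a minimal sketch of a last-update-wins comparison (untested, and assuming the gdata Python client exposes the <updated> element as entry.updated.text):

from dateutil import parser  # assumption: python-dateutil is available for parsing RFC 3339 timestamps

def newer_contact(contact_a, contact_b):
    # Return whichever ContactEntry was updated more recently (last-update-wins)
    updated_a = parser.parse(contact_a.updated.text)
    updated_b = parser.parse(contact_b.updated.text)
    return contact_a if updated_a >= updated_b else contact_b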
How are you getting different users' contacts feeds?
I tried to save the tokens in the datastore when the users grant access, but when I get the tokens back from the datastore for two users at a time, after an hour when the tokens expire, all tokens start behaving like the current user's token and I can only get the current user's contacts.
# Get_Shared_User_Token is the helper that loads the stored token for the given user
token = Get_Shared_User_Token(user_email)
contact_client = gdata.contacts.client.ContactsClient(source=USER_AGENT)
# Attach that user's token to the client so requests are made on that user's behalf
authorized_client = token.authorize(contact_client)
contacts_feed = authorized_client.GetContacts(q=query)
Can you please explain how one can get any user's contacts?
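For reference, a rough sketch of the kind of per-user token storage described above (the model and helper names are hypothetical, and it assumes the old db datastore API together with gdata.gauth's blob helpers):

import gdata.gauth
from google.appengine.ext import db

class StoredToken(db.Model):  # hypothetical model, keyed by the user's email address
    token_blob = db.TextProperty()  # serialized OAuth token for that specific user

def Get_Shared_User_Token(user_email):
    # Load the token stored for this user, rather than reusing the current session's token
    stored = StoredToken.get_by_key_name(user_email)
    return gdata.gauth.token_from_blob(stored.token_blob) if stored else None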
We are using a microservice-based pattern for our project, in which we have Users and their Orders. Users' personal information (name, email, mobile) is stored in a User table in a relational database, while users' order data is stored in an Orders collection in a NoSQL database. We want to develop an API to get a paginated list of all orders placed, with the order details and the finer details of the associated user (name, mobile, email) alongside each order. We store the userId in the Orders collection.
The problem is how to get the User details for each order in this list, since the two resources live in different databases. We also thought of storing the user's name, email, and mobile in the Orders collection itself, but if a user then updates their profile, the Orders collection will have stale user data.
What is the best approach to address this issue?
You can use the API Gateway pattern: the UI calls an API gateway endpoint, and the gateway calls both APIs/services, aggregates the results, and returns the aggregated response to the UI (the caller).
https://microservices.io/patterns/apigateway.html
Well, it mostly depends on your scalability needs in terms of data size and number of requests. You may go with the API gateway if you don't have too much data and don't get many requests to that service.
Otherwise, if you really need something scalable, you should implement your own approach with event-based communication.
I already provided an answer for a similar situation; you can take a look:
https://stackoverflow.com/a/63957775/3719412
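As a rough illustration of the event-based alternative (all names here are hypothetical, and a MongoDB-style Orders collection is assumed), the Orders service could keep a denormalized copy of the user fields it needs and update it whenever the Users service publishes a profile-change event:

def handle_user_updated(event, orders_collection):
    # Keep the denormalized user fields on each order in sync after a profile change
    user = event["payload"]  # e.g. {"userId": ..., "name": ..., "email": ..., "mobile": ...}
    orders_collection.update_many(
        {"userId": user["userId"]},
        {"$set": {"userName": user["name"], "userEmail": user["email"], "userMobile": user["mobile"]}},
    )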
You have two services, Orders and Users. You request the Orders service to get all orders. It returns response data containing user IDs (each order contains the ID of its user). Then you make a request to the Users service to get information about each user by the IDs you got before. Finally, you aggregate those results (if needed).
As others have mentioned, a good solution here is to implement an API Gateway. As a client, you send a request to a single endpoint (the gateway), and the gateway implements the logic described above.
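A minimal sketch of that aggregation step (the endpoint URLs and field names are assumptions, not part of the question):

import requests  # assumption: the services are reached over plain HTTP

def get_orders_with_user_details(page, size):
    # 1. Fetch a page of orders from the Orders service
    orders = requests.get(f"http://orders-service/orders?page={page}&size={size}").json()
    # 2. Collect the distinct user IDs and fetch those users from the Users service in one batch
    user_ids = {order["userId"] for order in orders}
    users = requests.get("http://users-service/users", params={"ids": ",".join(user_ids)}).json()
    users_by_id = {user["id"]: user for user in users}
    # 3. Attach the user details to each order before returning the aggregated page
    for order in orders:
        order["user"] = users_by_id.get(order["userId"])
    return orders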
I recently configured Service Cloud for my organization, and for the most part it is working great. We do have an issue I can't seem to solve, though. The first time an internal employee submits a case (email-to-case), there is no contact record to associate with the case, so the help desk must spend time creating a contact record for the employee before they can proceed with resolving the case. Is there a way to automate or sync with Office 365 to create a contact record for each employee?
I have considered and/or tested the options below, but none do what I want:
I have already set up and tested Einstein Activity Capture. This is a great tool but will not sync internal (same email domain) contacts into Salesforce.
I know I can automate an export from Office 365 into an AWS S3 bucket and then use AWS AppFlow to create/update contact records. This one is feasible, but I have to imagine there is a way to integrate the two platforms without exporting data into a staging area.
I can't use anything that requires manual intervention, like Data Loader.
Okay, so I need guidance on where to start.
What I want to do is this: upon clicking a button labelled "Search" in my web app, the app will connect to my Realtime Database, search it for the search criteria, and once it has found all matching cases it will create div blocks with the information inside them, in a list view, and assign each div's ID to the UID it gets back from the database.
database:
Users
--> Country
---->State
----->City
------>Post/ZipCode
------->UID
--------> User's information
Welcome to StackOverflow!
A great place to get started is the Firebase Realtime Database documentation or searching for Firecasts on YouTube (linked below).
As requested, here are some questions to ask yourself to get started and help scope out and define your new Firebase project.
What language are you going to use?
Are you planning on using any frameworks/libraries? e.g. for JavaScript, these would include things like jQuery, Polymer, and React.
What information are you storing in your database? e.g. user profiles, private user data/settings, public indexes, username lists, etc.
How is your database structured?
What data is being searched? The entire database? Values in a certain location?
What data needs to be displayed in your view?
Is the data accessible for just the current user or is it a public database that anyone can use?
What search criteria will be used? Is it just one filter at a time or many?
The answers to these questions aren't set in stone, but are to help you start thinking about the future of your project. They can be changed at any time as this isn't SQL where everything has to have its own schema.
If you intend on using "advanced searches" where you'll filter by multiple parameters at the same time, consider using Cloud Firestore instead.
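For a single-filter search against the Realtime Database, you typically order by one child key and filter on it. A rough sketch using the Firebase Admin SDK for Python (a web app would more likely use the JavaScript SDK, but the query shape is the same; the paths, field names, and credentials file are assumptions):

import firebase_admin
from firebase_admin import credentials, db

# Assumed service-account file and database URL for the project
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://your-project-default-rtdb.firebaseio.com"})

# Query the users under one post/zip code whose "name" child equals the search term
ref = db.reference("Users/UK/England/London/SW1A")
matches = ref.order_by_child("name").equal_to("search term").get()
for uid, info in matches.items():
    print(uid, info)  # in the web app, uid would become the ID of the generated div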
I recommend looking at some Firecasts to help guide you through these questions. Here are some links to them:
Firebase YouTube Channel
Video: Getting Started with the Firebase Realtime Database on the Web
Playlist: Firebase on the Web
We now have one site running, but we will need to build a branded site for our client soon. The client site will have exactly the same data set as our current site except for the user data. The client site must have totally separate user info, so that only the client can use that site.
I don't see the need to set up a new database or create a new user table for the client. My tentative solution is to add a "Company" column to the user table so that I can tell which site each user row belongs to.
I do not know whether this approach will work or whether it is best practice. Could anyone with experience of something like this shed some light on this question?
Thanks,
Nigong
P.S. I use LAMP with AWS.
Using an extra column to store a company / entity ID is a common approach for multitenant systems. In general, you will want to abstract the part that verifies you can only retrieve data you're allowed to see into a layer that all queries go through, like your ORM. This will prevent people new to the project from exposing or using data that shouldn't be exposed or used.
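As a sketch of that abstraction (the stack in the question is LAMP, but the pattern is the same in any language; the table and column names are assumptions):

# Every query for tenant-owned data goes through a helper that always applies
# the company filter, so callers cannot forget it.
def fetch_users(db_conn, company_id):
    cursor = db_conn.cursor()
    cursor.execute(
        "SELECT id, name, email FROM users WHERE company_id = %s",  # %s placeholder as in MySQLdb/PyMySQL
        (company_id,),
    )
    return cursor.fetchall()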
I have an Account entity that has a Facebook ID.
Sometimes, the client might send all Facebook IDs (the client's Facebook friends) to the server.
We want to select all Accounts whose Facebook ID is IN the list the client provided.
Looping and calling get on each Facebook ID seems rather slow, considering people might have 1000+ friends. Furthermore, GAE limits IN queries to 30 values.
Has anyone had a similar situation? How did you handle it?
Thanks!
You can set up a model that uses the Facebook ID as its key name, which allows you to use Model.get_by_key_name(key_names=fb_ids) to fetch all the models whose keys are in fb_ids at once.
e.g.
from google.appengine.ext import db

class FBModel(db.Model):
    account = db.ReferenceProperty(reference_class=Account)
When creating the model:
model = FBModel(key_name=fb_id)
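Fetching the accounts for a list of friend IDs could then look like this (a sketch; get_by_key_name accepts a list and returns None for IDs with no matching entity):

fb_models = FBModel.get_by_key_name(fb_ids)  # one batch get for all the Facebook IDs
accounts = [m.account for m in fb_models if m is not None]  # each .account dereference is its own datastore get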