how to communicate between two clients using channel API? - google-app-engine

I've successfully implemented the channel api to create the connection between browser and app engine server.
I want to ask what will be the way to send message from the second client to the first client.
I'm not getting the actual algorithm.

The client_id you used to create the connection to the App Engine server is what you need in order to send a message to another client. Either persist it in the datastore, or build it deterministically from known IDs — but either way you need some way to discover the other client's client_id. For example:
Create a room:
room = models.Room(user=user_id)
room.put()
token = channel.create_channel(str(room.key.id()) + user_id)
The other user joins the room:
room = models.Room.query().get()
room.other_user = user_id
room.put()
token = channel.create_channel(str(room.key.id()) + user_id)
Then pass the room id and token to your JS so it can send a message:
room = models.Room.get_by_id(room_id)
send_to = room.user if room.user != user_id else room.other_user
channel.send_message(str(room.key.id()) + send_to, message)
Note that user_id in each sample is the currently connected user.
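The routing rule in the last snippet — pick whichever participant is not the sender, then rebuild that participant's channel key from the room id — can be isolated as a small pure function. A sketch with `Room` stubbed as a namedtuple (illustrative only, not the App Engine model API):

```python
from collections import namedtuple

# Minimal stand-in for the Room model: two participants per room.
Room = namedtuple("Room", ["room_id", "user", "other_user"])

def channel_key_for_recipient(room, sender_id):
    """Return the client_id of the participant who is NOT the sender.

    The result is the same string that was passed to create_channel():
    the room id concatenated with the recipient's user id.
    """
    recipient = room.user if room.user != sender_id else room.other_user
    return str(room.room_id) + recipient
```

On the server you would then call channel.send_message(channel_key_for_recipient(room, user_id), message).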

Shopify data pulling into own database

We are working on a data warehouse project and need to pull data from our customer's Shopify database.
We want to analyse inventory stock quantity and stock value, and also do market analysis, traffic analysis, top landing pages, conversions, revenues, and more.
Shopify offers REST APIs for pulling data, but they do not cover all of our requirements.
Our questions:
Is there a plan we can order, like Shopify Partner Plus or the Admin API, that allows pulling all data fields?
Does anyone have experience pulling data from Shopify that goes deeper than the published REST API queries?
I am new to Shopify requirements, which is why I am asking.
I am not sure if it can solve your problem. Currently, our company uses an online tool called Acho Studio to retrieve Shopify data. It's like an ETL tool that lets you connect to Shopify and export data somewhere. The difference is that the tool hosts a server, so you can store data on it. You can also write SQL queries or apply actions to do analysis.
This is the code snippet I am using:
import requests

# Prepare headers
headers = {
    "Content-Type": "application/json",
    "X-Shopify-Access-Token": access_token,
}
# Define the Shopify URL
shop_host = f'https://{hostName}.myshopify.com'
url = f'{shop_host}/{order_endpoint}'
params = {'status': 'any',
          'limit': 250,
          'created_at_min': '2022-01-01',
          'created_at_max': '2022-06-20'}
session = requests.Session()
response = session.get(url=url, headers=headers, params=params)
orders = response.json()['orders']  # order data in JSON format
# Shopify pagination is cursor-based: keep following the 'next' Link header
while 'next' in response.links:
    response = session.get(response.links['next']['url'], headers=headers)
    orders.extend(response.json()['orders'])
I am able to pull the Order data and other data such as refunds, etc.

How do you pull all records from a paginated WebAPI into a SQL Server database in ASP.NET Core MVC?

My requirement is to pull data from an external API; the first call returns only 100 records, but with header information stating the total pages and total records in the remote database. I need to pull all these records at once and insert them into my database; subsequent calls to the API should only pull records that are new in the remote database.
I am working with ASP.NET Core 3.0 and a SQL Server database.
public void GetReservation(int? pageNumber, int? pageSize)
{
    using (HttpClient client = new HttpClient())
    {
        client.BaseAddress = new Uri("https://www.sitename.com/url");
        MediaTypeWithQualityHeaderValue contentType =
            new MediaTypeWithQualityHeaderValue("application/json");
        client.DefaultRequestHeaders.Accept.Add(contentType);
        HttpResponseMessage response =
            client.GetAsync($"/api/serviceurl?pageNumber={pageNumber}&pageSize={pageSize}").Result;
        string stringData = response.Content.ReadAsStringAsync().Result;
        List<Reservations> data = JsonConvert.DeserializeObject<List<Reservations>>(stringData);
        var headerInfo = response.Headers.GetValues("X-Pagination").First();
        XPagination obj = JsonConvert.DeserializeObject<XPagination>(headerInfo);
        // Insert-into-database code only ever receives the first 100 records
    }
}
headerInfo contains totalPages, totalRecords, currentPage, hasNext, hasPrevious, ...
It looks to me like you're almost there: just run this method in a loop until you have all the records. But first you need the total number of pages.
What I would do is:
1. Call the API with pageNumber 1 and pageSize 0, so you receive the pagination header.
2. Get the info from the header and loop over the pages until you are done.
3. You will have to write your own logic for fetching only the new reservations; for instance, store the last received page and record number so you can skip them next time.
Does this answer your question?
P.S.: It could very well be that your data provider only allows getting 100 rows at a time. If so, you will have to loop over 100-record pages until you have received them all.
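The loop in steps 1 and 2 can be sketched as follows (shown in Python for brevity — the same logic ports directly to HttpClient in the controller; the 'totalPages' field name is an assumption based on the X-Pagination header described in the question):

```python
def fetch_all(get_page):
    """Fetch every record from a paginated API.

    get_page(page_number) must return (records, pagination), where
    pagination is the decoded X-Pagination header, assumed to carry
    a 'totalPages' count as in the question.
    """
    # First call: gets page 1 plus the pagination header.
    records, pagination = get_page(1)
    # Remaining calls: walk pages 2..totalPages and accumulate records.
    for page in range(2, pagination["totalPages"] + 1):
        page_records, _ = get_page(page)
        records.extend(page_records)
    return records
```

In the C# version, get_page would wrap client.GetAsync plus the JsonConvert.DeserializeObject call on the X-Pagination header.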

Gmail REST API Thread Search not Giving Expected Results

We have built an Email Audit Application for one of our customers. The app utilizes the Gmail REST API and provides a front-end interface that allows users with permission to run audit queries based on a selected date-ranges. The date-range that is provided by the user is utilized in the Email Thread search query.
We have noticed, however, that the API is showing a discrepancy between the threads that are returned and the actual items that are in the inbox of an audited user. For example, in order to collect everything within 1 day, say 4/28, we need to expand the audit range from 4/27-4/29.
The documentation for the Gmail REST API neither explains nor highlights this behavior. Is this an issue with the API, or are there additional parameters that can specify the time zone for the thread search?
Below you will find a snippet of code that is utilized to grab such email threads:
def GrabAllThreadIDs(user_email, after_date, before_date):
    query = "in:inbox after:" + after_date + " before:" + before_date
    # Create the Gmail service
    gmail_service = create_gmail_service(user_email)
    raw_thread_response = ListThreadsMatchingQuery(gmail_service, 'me', query)
    all_ids = []
    for item in raw_thread_response:
        all_ids.append(item['id'])
    return all_ids
======================================================
def ListThreadsMatchingQuery(service, user_id, query=''):
    """List all Threads of the user's mailbox matching the query.

    Args:
        service: Authorized Gmail API service instance.
        user_id: User's email address. The special value "me"
            can be used to indicate the authenticated user.
        query: String used to filter messages returned.
            Eg.- 'label:UNREAD' for unread messages only.

    Returns:
        List of threads that match the criteria of the query. Note that the
        returned list contains Thread IDs; you must use get with the
        appropriate ID to get the details for a Thread.
    """
    try:
        response = service.users().threads().list(userId=user_id, q=query).execute()
        threads = []
        if 'threads' in response:
            threads.extend(response['threads'])
        while 'nextPageToken' in response:
            page_token = response['nextPageToken']
            response = service.users().threads().list(userId=user_id, q=query,
                                                      pageToken=page_token).execute()
            threads.extend(response['threads'])
        return threads
    except errors.HttpError as error:
        print('An error occurred: %s' % error)
======================================================
That is how Advanced Search is designed: after: matches messages sent after 12:00 AM (00:00) on the given date, and before: matches messages sent before it. Asking for after:2015/04/28 and before:2015/04/28 therefore describes a non-existent timespan.
I like to use the alternate form after:<TIME_IN_SECONDS_SINCE_THE_EPOCH>. If you would like to get all the messages received on 2015/04/28 you would write after:1430172000 before:1430258399 (2015/04/28 00:00 to 2015/04/28 23:59:59).
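A quick way to compute those epoch values yourself — a sketch that uses the machine's local timezone, so run it where the day boundary should apply:

```python
import time
from datetime import datetime, timedelta

def day_bounds_epoch(year, month, day):
    """Return (after, before) epoch seconds covering one local calendar day.

    'after' is 00:00:00 of that day and 'before' is 23:59:59, so the pair
    drops straight into a Gmail query: after:<after> before:<before>.
    """
    start = datetime(year, month, day)
    end = start + timedelta(days=1) - timedelta(seconds=1)
    return int(time.mktime(start.timetuple())), int(time.mktime(end.timetuple()))

after_ts, before_ts = day_bounds_epoch(2015, 4, 28)
query = "in:inbox after:%d before:%d" % (after_ts, before_ts)
```

The exact numbers depend on the timezone the code runs in, which is the point: the query boundaries line up with the audited user's local day rather than UTC.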

google app engine: concurrent user registrations

I know this is a classical problem, but I still don't know how to solve it. On Google App Engine, I have a member registration form which uses jQuery validation to check whether a username exists.
There is, of course, a concurrency problem: several users try to register, enter the same username, validation finds the username available, and they press "Add" at approximately the same time. Validation wouldn't detect this. In my application, username, email, and personal ID should all be unique. How do I prevent the following code from having the concurrency problem?
member = Member()
member.username = self.request.get('username')
member.Pid = self.request.get('Pid')
member.email = self.request.get('email')
...
As the uniqueness constraint is on username, you have to use it as the key in the datastore and use transactions.
def txn():
    key = ndb.Key(Member, username)
    member = key.get()
    if member is not None:
        raise CustomAlreadyExistsException(member)  # This will abort the txn
    member = Member(
        id=username,
        Pid=self.request.get('Pid'),
        email=self.request.get('email'),
        ...)
    member.put()

ndb.transaction(txn)
This makes sure only one person can register a username.
The jQuery helper would check if ndb.Key(Member, userid).get() gives a result or not. The GET is not transactional.
To improve client-side usability by "reserving" a username after checking availability, you could use memcache as suggested by Daniel, but I'd call YAGNI, skip the complexity, and rather let some people get a validation error after submitting the form. Note that memcache is best-effort and makes no guarantees about anything.
If you need guaranteed uniqueness on multiple fields, you have to add Model classes for them and check in a cross group (XG) transaction.
class Pid(ndb.Model):
    member = ndb.KeyProperty()

class Email(ndb.Model):
    member = ndb.KeyProperty()

class Member(ndb.Model):
    pid = ndb.KeyProperty()
    email = ndb.KeyProperty()

    @property
    def pid_value(self):
        return self.pid.id()

    @property
    def email_value(self):
        return self.email.id()

def txn():
    member_key = ndb.Key(Member, username)
    pid_key = ndb.Key(Pid, self.request.get('Pid'))
    email_key = ndb.Key(Email, self.request.get('email'))
    member, pid, email = ndb.get_multi([member_key, pid_key, email_key])
    if member is not None or pid is not None or email is not None:
        raise CustomAlreadyExistsException(member, pid, email)  # This will abort the txn
    # Create instances referencing each other
    email = Email(key=email_key, member=member_key)
    pid = Pid(key=pid_key, member=member_key)
    member = Member(
        key=member_key,
        pid=pid_key,
        email=email_key,
        ...)
    ndb.put_multi([member, pid, email])

ndb.transaction(txn, xg=True)
This is a great use for memcache. Your Ajax validation function should put an entry into memcache to record that the username has been requested. It should also check both memcache and the datastore to ensure that the username is free. Similarly, the registration code should check memcache to ensure that the current user is the one who requested the username.
This nicely solves your concurrency problem, and the best thing is that entries in memcache expire by themselves, either on a timed basis or when the cache gets too full.
I agree with tesdal.
If you still want to implement the memcache trick suggested by Daniel, you should do something like memcache.add(usernameA, dummy_value, short_period). That way you know usernameA is reserved for a short period and won't conflict with memcache.add(usernameB, ...).
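The reservation trick works because memcache.add is atomic: it succeeds only if the key is absent. A minimal sketch of the same semantics, using a plain dictionary with expiry times as a stand-in for memcache (illustrative only — on App Engine you would call google.appengine.api.memcache.add with its time= argument instead):

```python
import time

class ReservationCache(object):
    """Toy stand-in for memcache: add() succeeds only if the key is free."""

    def __init__(self):
        self._store = {}  # username -> expiry timestamp

    def add(self, username, ttl_seconds):
        now = time.time()
        expiry = self._store.get(username)
        if expiry is not None and expiry > now:
            return False  # still reserved by another session
        self._store[username] = now + ttl_seconds
        return True
```

The Ajax validator would call add(username, short_period) after a successful availability check; the registration handler then verifies the reservation belongs to the current user before committing the datastore transaction.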

GAE Datastore ID

I've created two different Entities, one a User and one a Message they can create. I assign each user an ID and then want to assign this ID to each message which that user creates. How can I go about this? Do I have to do it in a query?
Thanks
Assuming that you are using Python NDB, you can have something like the following:
class User(ndb.Model):
    # put your fields here
    pass

class Message(ndb.Model):
    owner = ndb.KeyProperty()
    # other fields
Create and save a User:
user = User(field1=value1, ....)
user.put()
Create and save a Message:
message = Message(owner=user.key, ...)
message.put()
Query a message based on user:
messages = Message.query().filter(Message.owner==user.key).fetch() # returns a list of messages that have this owner
For more information about NDB, take a look at Python NDB API.
Also, you should take a look at Python Datastore in order to get a better understanding of data modeling in App Engine.
